Columns: datasetId (string, lengths 2–117), card (string, lengths 19–1.01M)
chathuranga-jayanath/context-5-finmath-times4j-html-mavendoxia-wro4j-guava-supercsv-len-1000-prompt-1
---
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: filepath
    dtype: string
  - name: start_bug_line
    dtype: int64
  - name: end_bug_line
    dtype: int64
  - name: bug
    dtype: string
  - name: fix
    dtype: string
  - name: ctx
    dtype: string
  splits:
  - name: train
    num_bytes: 3584332
    num_examples: 5105
  - name: validation
    num_bytes: 448298
    num_examples: 637
  - name: test
    num_bytes: 441485
    num_examples: 637
  download_size: 1770953
  dataset_size: 4474115
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---
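The split metadata in the card above can be cross-checked with a small sketch (not part of the original card): the per-split `num_bytes` values should sum to `dataset_size`.

```python
# Sanity-check the dataset_info metadata: per-split num_bytes should add
# up to dataset_size. download_size (1770953) is smaller because it refers
# to the compressed files hosted on the Hub.
splits = {
    "train": 3584332,      # 5105 examples
    "validation": 448298,  # 637 examples
    "test": 441485,        # 637 examples
}
dataset_size = 4474115

total = sum(splits.values())
assert total == dataset_size
print(total)  # → 4474115
```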
amishshah/imbalanced_6
---
dataset_info:
  features:
  - name: title
    dtype: string
  - name: label
    dtype: int64
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 45166669.74
    num_examples: 27000
  - name: test
    num_bytes: 5018518.86
    num_examples: 3000
  download_size: 0
  dataset_size: 50185188.6
---

# Dataset Card for "imbalanced_6"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
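The sizes in this card can also be cross-checked with a small sketch (not part of the original card); since the per-split `num_bytes` values are stored as floats here, the comparison uses a tolerance.

```python
# Sanity-check the imbalanced_6 metadata: the fractional per-split
# num_bytes should add up to dataset_size. Compare with a tolerance
# because the byte counts are floats.
train_bytes = 45166669.74  # 27000 examples
test_bytes = 5018518.86    # 3000 examples
dataset_size = 50185188.6

assert abs((train_bytes + test_bytes) - dataset_size) < 1e-3
```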
mitchcardoso/mitch
---
license: openrail
---
open-llm-leaderboard/details_hfl__chinese-alpaca-2-13b
--- pretty_name: Evaluation run of hfl/chinese-alpaca-2-13b dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [hfl/chinese-alpaca-2-13b](https://huggingface.co/hfl/chinese-alpaca-2-13b) on\ \ the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 63 configurations, each one corresponding to one of the\ \ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run. The \"train\" split always points to the latest results.\n\ \nAn additional configuration \"results\" stores all the aggregated results of the\ \ run (and is used to compute and display the aggregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_hfl__chinese-alpaca-2-13b\"\ ,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\ These are the [latest results from run 2023-12-09T16:00:55.681332](https://huggingface.co/datasets/open-llm-leaderboard/details_hfl__chinese-alpaca-2-13b/blob/main/results_2023-12-09T16-00-55.681332.json) (note\ \ that there might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You can find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n    \"all\": {\n        \"acc\": 0.5502321350314341,\n\ \        \"acc_stderr\": 0.033838534455358144,\n        \"acc_norm\": 0.5559937862519342,\n\ \        \"acc_norm_stderr\": 0.03456092398331123,\n        \"mc1\": 0.34761321909424725,\n\ \        \"mc1_stderr\": 0.016670769188897303,\n        \"mc2\": 0.5022258550236057,\n\ \        \"mc2_stderr\": 0.015284175194421176\n    },\n    \"harness|arc:challenge|25\"\ : {\n        \"acc\": 0.5418088737201365,\n        \"acc_stderr\": 0.014560220308714697,\n\ \        \"acc_norm\": 0.5870307167235495,\n        \"acc_norm_stderr\": 0.014388344935398329\n\ \    },\n    \"harness|hellaswag|10\": {\n        \"acc\": 0.59699263095001,\n  \ \      \"acc_stderr\": 0.004894997736719051,\n        \"acc_norm\": 0.7975502887870942,\n\ \        \"acc_norm_stderr\": 0.004010043978333125\n    },\n    \"harness|hendrycksTest-abstract_algebra|5\"\ : {\n        \"acc\": 0.27,\n        \"acc_stderr\": 0.04461960433384741,\n     \ \   \"acc_norm\": 0.27,\n        \"acc_norm_stderr\": 0.04461960433384741\n    \ \ },\n    \"harness|hendrycksTest-anatomy|5\": {\n        \"acc\": 0.4888888888888889,\n\ \        \"acc_stderr\": 0.04318275491977976,\n        \"acc_norm\": 0.4888888888888889,\n\ \        \"acc_norm_stderr\": 0.04318275491977976\n    },\n    \"harness|hendrycksTest-astronomy|5\"\ : {\n        \"acc\": 0.5921052631578947,\n        \"acc_stderr\": 0.039993097127774734,\n\ \        \"acc_norm\": 0.5921052631578947,\n        \"acc_norm_stderr\": 0.039993097127774734\n\ \    },\n    \"harness|hendrycksTest-business_ethics|5\": {\n        \"acc\": 0.57,\n\ \        \"acc_stderr\": 0.049756985195624284,\n        \"acc_norm\": 0.57,\n   \ \     \"acc_norm_stderr\": 0.049756985195624284\n    },\n    \"harness|hendrycksTest-clinical_knowledge|5\"\ : {\n        \"acc\": 0.5886792452830188,\n        \"acc_stderr\": 0.030285009259009794,\n\ \        \"acc_norm\": 0.5886792452830188,\n        \"acc_norm_stderr\": 0.030285009259009794\n\ \    },\n    \"harness|hendrycksTest-college_biology|5\": {\n        \"acc\": 0.5625,\n\ \        \"acc_stderr\": 0.04148415739394154,\n        \"acc_norm\": 0.5625,\n  \ \      \"acc_norm_stderr\": 0.04148415739394154\n    },\n 
\"harness|hendrycksTest-college_chemistry|5\"\ : {\n \"acc\": 0.42,\n \"acc_stderr\": 0.049604496374885836,\n \ \ \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.049604496374885836\n \ \ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\ acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\"\ : 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-college_mathematics|5\"\ : {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \ \ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \ \ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5260115606936416,\n\ \ \"acc_stderr\": 0.038073017265045125,\n \"acc_norm\": 0.5260115606936416,\n\ \ \"acc_norm_stderr\": 0.038073017265045125\n },\n \"harness|hendrycksTest-college_physics|5\"\ : {\n \"acc\": 0.35294117647058826,\n \"acc_stderr\": 0.04755129616062946,\n\ \ \"acc_norm\": 0.35294117647058826,\n \"acc_norm_stderr\": 0.04755129616062946\n\ \ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\ \ 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\": 0.66,\n\ \ \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\ : {\n \"acc\": 0.37872340425531914,\n \"acc_stderr\": 0.03170995606040655,\n\ \ \"acc_norm\": 0.37872340425531914,\n \"acc_norm_stderr\": 0.03170995606040655\n\ \ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2807017543859649,\n\ \ \"acc_stderr\": 0.042270544512322,\n \"acc_norm\": 0.2807017543859649,\n\ \ \"acc_norm_stderr\": 0.042270544512322\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\ : {\n \"acc\": 0.5448275862068965,\n \"acc_stderr\": 0.04149886942192117,\n\ \ \"acc_norm\": 0.5448275862068965,\n \"acc_norm_stderr\": 0.04149886942192117\n\ \ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\ : 0.32275132275132273,\n \"acc_stderr\": 0.024078943243597016,\n \"\ acc_norm\": 0.32275132275132273,\n \"acc_norm_stderr\": 
0.024078943243597016\n\ \ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3492063492063492,\n\ \ \"acc_stderr\": 0.04263906892795132,\n \"acc_norm\": 0.3492063492063492,\n\ \ \"acc_norm_stderr\": 0.04263906892795132\n },\n \"harness|hendrycksTest-global_facts|5\"\ : {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \ \ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n \ \ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.667741935483871,\n\ \ \"acc_stderr\": 0.0267955608481228,\n \"acc_norm\": 0.667741935483871,\n\ \ \"acc_norm_stderr\": 0.0267955608481228\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\ : {\n \"acc\": 0.4482758620689655,\n \"acc_stderr\": 0.03499113137676744,\n\ \ \"acc_norm\": 0.4482758620689655,\n \"acc_norm_stderr\": 0.03499113137676744\n\ \ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \ \ \"acc\": 0.62,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\"\ : 0.62,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\ : {\n \"acc\": 0.6606060606060606,\n \"acc_stderr\": 0.03697442205031596,\n\ \ \"acc_norm\": 0.6606060606060606,\n \"acc_norm_stderr\": 0.03697442205031596\n\ \ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\ : 0.696969696969697,\n \"acc_stderr\": 0.032742879140268674,\n \"\ acc_norm\": 0.696969696969697,\n \"acc_norm_stderr\": 0.032742879140268674\n\ \ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\ \ \"acc\": 0.7979274611398963,\n \"acc_stderr\": 0.02897908979429673,\n\ \ \"acc_norm\": 0.7979274611398963,\n \"acc_norm_stderr\": 0.02897908979429673\n\ \ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \ \ \"acc\": 0.5102564102564102,\n \"acc_stderr\": 0.025345672221942374,\n\ \ \"acc_norm\": 0.5102564102564102,\n \"acc_norm_stderr\": 0.025345672221942374\n\ \ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\ 
acc\": 0.2851851851851852,\n \"acc_stderr\": 0.027528599210340492,\n \ \ \"acc_norm\": 0.2851851851851852,\n \"acc_norm_stderr\": 0.027528599210340492\n\ \ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \ \ \"acc\": 0.5798319327731093,\n \"acc_stderr\": 0.03206183783236152,\n \ \ \"acc_norm\": 0.5798319327731093,\n \"acc_norm_stderr\": 0.03206183783236152\n\ \ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\ : 0.33112582781456956,\n \"acc_stderr\": 0.038425817186598696,\n \"\ acc_norm\": 0.33112582781456956,\n \"acc_norm_stderr\": 0.038425817186598696\n\ \ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\ : 0.7541284403669725,\n \"acc_stderr\": 0.01846194096870843,\n \"\ acc_norm\": 0.7541284403669725,\n \"acc_norm_stderr\": 0.01846194096870843\n\ \ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\ : 0.4722222222222222,\n \"acc_stderr\": 0.0340470532865388,\n \"acc_norm\"\ : 0.4722222222222222,\n \"acc_norm_stderr\": 0.0340470532865388\n },\n\ \ \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7549019607843137,\n\ \ \"acc_stderr\": 0.030190282453501943,\n \"acc_norm\": 0.7549019607843137,\n\ \ \"acc_norm_stderr\": 0.030190282453501943\n },\n \"harness|hendrycksTest-high_school_world_history|5\"\ : {\n \"acc\": 0.7088607594936709,\n \"acc_stderr\": 0.029571601065753374,\n\ \ \"acc_norm\": 0.7088607594936709,\n \"acc_norm_stderr\": 0.029571601065753374\n\ \ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6278026905829597,\n\ \ \"acc_stderr\": 0.03244305283008731,\n \"acc_norm\": 0.6278026905829597,\n\ \ \"acc_norm_stderr\": 0.03244305283008731\n },\n \"harness|hendrycksTest-human_sexuality|5\"\ : {\n \"acc\": 0.6030534351145038,\n \"acc_stderr\": 0.04291135671009224,\n\ \ \"acc_norm\": 0.6030534351145038,\n \"acc_norm_stderr\": 0.04291135671009224\n\ \ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\ \ 0.7355371900826446,\n \"acc_stderr\": 
0.04026187527591205,\n \"\ acc_norm\": 0.7355371900826446,\n \"acc_norm_stderr\": 0.04026187527591205\n\ \ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7407407407407407,\n\ \ \"acc_stderr\": 0.042365112580946336,\n \"acc_norm\": 0.7407407407407407,\n\ \ \"acc_norm_stderr\": 0.042365112580946336\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\ : {\n \"acc\": 0.6134969325153374,\n \"acc_stderr\": 0.038258255488486076,\n\ \ \"acc_norm\": 0.6134969325153374,\n \"acc_norm_stderr\": 0.038258255488486076\n\ \ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.36607142857142855,\n\ \ \"acc_stderr\": 0.0457237235873743,\n \"acc_norm\": 0.36607142857142855,\n\ \ \"acc_norm_stderr\": 0.0457237235873743\n },\n \"harness|hendrycksTest-management|5\"\ : {\n \"acc\": 0.6990291262135923,\n \"acc_stderr\": 0.045416094465039476,\n\ \ \"acc_norm\": 0.6990291262135923,\n \"acc_norm_stderr\": 0.045416094465039476\n\ \ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.811965811965812,\n\ \ \"acc_stderr\": 0.025598193686652244,\n \"acc_norm\": 0.811965811965812,\n\ \ \"acc_norm_stderr\": 0.025598193686652244\n },\n \"harness|hendrycksTest-medical_genetics|5\"\ : {\n \"acc\": 0.53,\n \"acc_stderr\": 0.05016135580465918,\n \ \ \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.05016135580465918\n \ \ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7522349936143039,\n\ \ \"acc_stderr\": 0.015438083080568972,\n \"acc_norm\": 0.7522349936143039,\n\ \ \"acc_norm_stderr\": 0.015438083080568972\n },\n \"harness|hendrycksTest-moral_disputes|5\"\ : {\n \"acc\": 0.6242774566473989,\n \"acc_stderr\": 0.02607431485165708,\n\ \ \"acc_norm\": 0.6242774566473989,\n \"acc_norm_stderr\": 0.02607431485165708\n\ \ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3865921787709497,\n\ \ \"acc_stderr\": 0.016286674879101026,\n \"acc_norm\": 0.3865921787709497,\n\ \ \"acc_norm_stderr\": 0.016286674879101026\n },\n 
\"harness|hendrycksTest-nutrition|5\"\ : {\n \"acc\": 0.5947712418300654,\n \"acc_stderr\": 0.028110928492809075,\n\ \ \"acc_norm\": 0.5947712418300654,\n \"acc_norm_stderr\": 0.028110928492809075\n\ \ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6366559485530546,\n\ \ \"acc_stderr\": 0.02731684767419271,\n \"acc_norm\": 0.6366559485530546,\n\ \ \"acc_norm_stderr\": 0.02731684767419271\n },\n \"harness|hendrycksTest-prehistory|5\"\ : {\n \"acc\": 0.6018518518518519,\n \"acc_stderr\": 0.027237415094592474,\n\ \ \"acc_norm\": 0.6018518518518519,\n \"acc_norm_stderr\": 0.027237415094592474\n\ \ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\ acc\": 0.40425531914893614,\n \"acc_stderr\": 0.029275532159704725,\n \ \ \"acc_norm\": 0.40425531914893614,\n \"acc_norm_stderr\": 0.029275532159704725\n\ \ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4230769230769231,\n\ \ \"acc_stderr\": 0.01261820406658839,\n \"acc_norm\": 0.4230769230769231,\n\ \ \"acc_norm_stderr\": 0.01261820406658839\n },\n \"harness|hendrycksTest-professional_medicine|5\"\ : {\n \"acc\": 0.47794117647058826,\n \"acc_stderr\": 0.030343264224213535,\n\ \ \"acc_norm\": 0.47794117647058826,\n \"acc_norm_stderr\": 0.030343264224213535\n\ \ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\ acc\": 0.5245098039215687,\n \"acc_stderr\": 0.020203517280261436,\n \ \ \"acc_norm\": 0.5245098039215687,\n \"acc_norm_stderr\": 0.020203517280261436\n\ \ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6090909090909091,\n\ \ \"acc_stderr\": 0.046737523336702384,\n \"acc_norm\": 0.6090909090909091,\n\ \ \"acc_norm_stderr\": 0.046737523336702384\n },\n \"harness|hendrycksTest-security_studies|5\"\ : {\n \"acc\": 0.5959183673469388,\n \"acc_stderr\": 0.03141470802586589,\n\ \ \"acc_norm\": 0.5959183673469388,\n \"acc_norm_stderr\": 0.03141470802586589\n\ \ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6865671641791045,\n\ \ 
\"acc_stderr\": 0.032801882053486456,\n \"acc_norm\": 0.6865671641791045,\n\ \ \"acc_norm_stderr\": 0.032801882053486456\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\ : {\n \"acc\": 0.82,\n \"acc_stderr\": 0.038612291966536934,\n \ \ \"acc_norm\": 0.82,\n \"acc_norm_stderr\": 0.038612291966536934\n \ \ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.463855421686747,\n\ \ \"acc_stderr\": 0.03882310850890593,\n \"acc_norm\": 0.463855421686747,\n\ \ \"acc_norm_stderr\": 0.03882310850890593\n },\n \"harness|hendrycksTest-world_religions|5\"\ : {\n \"acc\": 0.7543859649122807,\n \"acc_stderr\": 0.03301405946987251,\n\ \ \"acc_norm\": 0.7543859649122807,\n \"acc_norm_stderr\": 0.03301405946987251\n\ \ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.34761321909424725,\n\ \ \"mc1_stderr\": 0.016670769188897303,\n \"mc2\": 0.5022258550236057,\n\ \ \"mc2_stderr\": 0.015284175194421176\n },\n \"harness|winogrande|5\"\ : {\n \"acc\": 0.7561168113654302,\n \"acc_stderr\": 0.012068923278908189\n\ \ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.25018953752843065,\n \ \ \"acc_stderr\": 0.011930334350873352\n }\n}\n```" repo_url: https://huggingface.co/hfl/chinese-alpaca-2-13b leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_arc_challenge_25 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|arc:challenge|25_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|arc:challenge|25_2023-12-09T16-00-55.681332.parquet' - config_name: harness_gsm8k_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|gsm8k|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hellaswag_10 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hellaswag|10_2023-12-09T16-00-55.681332.parquet' - split: 
latest path: - '**/details_harness|hellaswag|10_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-09T16-00-55.681332.parquet' - 
'**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-international_law|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-management|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-marketing|5_2023-12-09T16-00-55.681332.parquet' 
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-sociology|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-virology|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-09T16-00-55.681332.parquet' - 
'**/details_harness|hendrycksTest-college_biology|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-09T16-00-55.681332.parquet' - 
'**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-international_law|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-management|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-marketing|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2023-12-09T16-00-55.681332.parquet' - 
'**/details_harness|hendrycksTest-professional_accounting|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-sociology|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-virology|5_2023-12-09T16-00-55.681332.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_abstract_algebra_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_anatomy_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-anatomy|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-anatomy|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_astronomy_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-astronomy|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-astronomy|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_business_ethics_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - 
'**/details_harness|hendrycksTest-business_ethics|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-business_ethics|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_clinical_knowledge_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_college_biology_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-college_biology|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_biology|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_college_chemistry_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_college_computer_science_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_college_mathematics_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_college_medicine_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - 
'**/details_harness|hendrycksTest-college_medicine|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_medicine|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_college_physics_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-college_physics|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_physics|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_computer_security_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-computer_security|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-computer_security|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_conceptual_physics_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_econometrics_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-econometrics|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-econometrics|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_electrical_engineering_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_elementary_mathematics_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-09T16-00-55.681332.parquet' - 
split: latest path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_formal_logic_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-formal_logic|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-formal_logic|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_global_facts_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-global_facts|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-global_facts|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_high_school_biology_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_high_school_chemistry_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_high_school_computer_science_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_high_school_european_history_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_high_school_geography_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_high_school_government_and_politics_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_high_school_macroeconomics_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_high_school_mathematics_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_high_school_microeconomics_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_high_school_physics_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - 
'**/details_harness|hendrycksTest-high_school_physics|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_high_school_psychology_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_high_school_statistics_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_high_school_us_history_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_high_school_world_history_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_human_aging_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-human_aging|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_aging|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_human_sexuality_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - 
'**/details_harness|hendrycksTest-human_sexuality|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_international_law_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-international_law|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-international_law|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_jurisprudence_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_logical_fallacies_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_machine_learning_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-machine_learning|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-machine_learning|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_management_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-management|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-management|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_marketing_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-marketing|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-marketing|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_medical_genetics_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_miscellaneous_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_moral_disputes_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_moral_scenarios_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_nutrition_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-nutrition|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-nutrition|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_philosophy_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-philosophy|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-philosophy|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_prehistory_5 data_files: - split: 
2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-prehistory|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-prehistory|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_professional_accounting_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_professional_law_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-professional_law|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_law|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_professional_medicine_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_professional_psychology_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_public_relations_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-public_relations|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-public_relations|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_security_studies_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - 
'**/details_harness|hendrycksTest-security_studies|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-security_studies|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_sociology_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-sociology|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-sociology|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_us_foreign_policy_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_virology_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-virology|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-virology|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_hendrycksTest_world_religions_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|hendrycksTest-world_religions|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|hendrycksTest-world_religions|5_2023-12-09T16-00-55.681332.parquet' - config_name: harness_truthfulqa_mc_0 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|truthfulqa:mc|0_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|truthfulqa:mc|0_2023-12-09T16-00-55.681332.parquet' - config_name: harness_winogrande_5 data_files: - split: 2023_12_09T16_00_55.681332 path: - '**/details_harness|winogrande|5_2023-12-09T16-00-55.681332.parquet' - split: latest path: - '**/details_harness|winogrande|5_2023-12-09T16-00-55.681332.parquet' - config_name: results data_files: - split: 
2023_12_09T16_00_55.681332 path: - results_2023-12-09T16-00-55.681332.parquet - split: latest path: - results_2023-12-09T16-00-55.681332.parquet --- # Dataset Card for Evaluation run of hfl/chinese-alpaca-2-13b ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/hfl/chinese-alpaca-2-13b - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** clementine@hf.co ### Dataset Summary Dataset automatically created during the evaluation run of model [hfl/chinese-alpaca-2-13b](https://huggingface.co/hfl/chinese-alpaca-2-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_hfl__chinese-alpaca-2-13b", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-12-09T16:00:55.681332](https://huggingface.co/datasets/open-llm-leaderboard/details_hfl__chinese-alpaca-2-13b/blob/main/results_2023-12-09T16-00-55.681332.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks.
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.5502321350314341, "acc_stderr": 0.033838534455358144, "acc_norm": 0.5559937862519342, "acc_norm_stderr": 0.03456092398331123, "mc1": 0.34761321909424725, "mc1_stderr": 0.016670769188897303, "mc2": 0.5022258550236057, "mc2_stderr": 0.015284175194421176 }, "harness|arc:challenge|25": { "acc": 0.5418088737201365, "acc_stderr": 0.014560220308714697, "acc_norm": 0.5870307167235495, "acc_norm_stderr": 0.014388344935398329 }, "harness|hellaswag|10": { "acc": 0.59699263095001, "acc_stderr": 0.004894997736719051, "acc_norm": 0.7975502887870942, "acc_norm_stderr": 0.004010043978333125 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.27, "acc_stderr": 0.04461960433384741, "acc_norm": 0.27, "acc_norm_stderr": 0.04461960433384741 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.4888888888888889, "acc_stderr": 0.04318275491977976, "acc_norm": 0.4888888888888889, "acc_norm_stderr": 0.04318275491977976 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.5921052631578947, "acc_stderr": 0.039993097127774734, "acc_norm": 0.5921052631578947, "acc_norm_stderr": 0.039993097127774734 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.57, "acc_stderr": 0.049756985195624284, "acc_norm": 0.57, "acc_norm_stderr": 0.049756985195624284 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.5886792452830188, "acc_stderr": 0.030285009259009794, "acc_norm": 0.5886792452830188, "acc_norm_stderr": 0.030285009259009794 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.5625, "acc_stderr": 0.04148415739394154, "acc_norm": 0.5625, "acc_norm_stderr": 0.04148415739394154 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.42, "acc_stderr": 0.049604496374885836, "acc_norm": 0.42, "acc_norm_stderr": 0.049604496374885836 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.5, "acc_stderr": 0.050251890762960605, "acc_norm": 0.5, "acc_norm_stderr": 
0.050251890762960605 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.31, "acc_stderr": 0.04648231987117316, "acc_norm": 0.31, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.5260115606936416, "acc_stderr": 0.038073017265045125, "acc_norm": 0.5260115606936416, "acc_norm_stderr": 0.038073017265045125 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.35294117647058826, "acc_stderr": 0.04755129616062946, "acc_norm": 0.35294117647058826, "acc_norm_stderr": 0.04755129616062946 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.66, "acc_stderr": 0.04760952285695237, "acc_norm": 0.66, "acc_norm_stderr": 0.04760952285695237 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.37872340425531914, "acc_stderr": 0.03170995606040655, "acc_norm": 0.37872340425531914, "acc_norm_stderr": 0.03170995606040655 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.2807017543859649, "acc_stderr": 0.042270544512322, "acc_norm": 0.2807017543859649, "acc_norm_stderr": 0.042270544512322 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5448275862068965, "acc_stderr": 0.04149886942192117, "acc_norm": 0.5448275862068965, "acc_norm_stderr": 0.04149886942192117 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.32275132275132273, "acc_stderr": 0.024078943243597016, "acc_norm": 0.32275132275132273, "acc_norm_stderr": 0.024078943243597016 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.3492063492063492, "acc_stderr": 0.04263906892795132, "acc_norm": 0.3492063492063492, "acc_norm_stderr": 0.04263906892795132 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.33, "acc_stderr": 0.04725815626252604, "acc_norm": 0.33, "acc_norm_stderr": 0.04725815626252604 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.667741935483871, "acc_stderr": 0.0267955608481228, "acc_norm": 0.667741935483871, "acc_norm_stderr": 0.0267955608481228 }, 
"harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.4482758620689655, "acc_stderr": 0.03499113137676744, "acc_norm": 0.4482758620689655, "acc_norm_stderr": 0.03499113137676744 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.62, "acc_stderr": 0.048783173121456316, "acc_norm": 0.62, "acc_norm_stderr": 0.048783173121456316 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.6606060606060606, "acc_stderr": 0.03697442205031596, "acc_norm": 0.6606060606060606, "acc_norm_stderr": 0.03697442205031596 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.696969696969697, "acc_stderr": 0.032742879140268674, "acc_norm": 0.696969696969697, "acc_norm_stderr": 0.032742879140268674 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.7979274611398963, "acc_stderr": 0.02897908979429673, "acc_norm": 0.7979274611398963, "acc_norm_stderr": 0.02897908979429673 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.5102564102564102, "acc_stderr": 0.025345672221942374, "acc_norm": 0.5102564102564102, "acc_norm_stderr": 0.025345672221942374 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.2851851851851852, "acc_stderr": 0.027528599210340492, "acc_norm": 0.2851851851851852, "acc_norm_stderr": 0.027528599210340492 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.5798319327731093, "acc_stderr": 0.03206183783236152, "acc_norm": 0.5798319327731093, "acc_norm_stderr": 0.03206183783236152 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.33112582781456956, "acc_stderr": 0.038425817186598696, "acc_norm": 0.33112582781456956, "acc_norm_stderr": 0.038425817186598696 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.7541284403669725, "acc_stderr": 0.01846194096870843, "acc_norm": 0.7541284403669725, "acc_norm_stderr": 0.01846194096870843 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.4722222222222222, "acc_stderr": 
0.0340470532865388, "acc_norm": 0.4722222222222222, "acc_norm_stderr": 0.0340470532865388 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7549019607843137, "acc_stderr": 0.030190282453501943, "acc_norm": 0.7549019607843137, "acc_norm_stderr": 0.030190282453501943 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7088607594936709, "acc_stderr": 0.029571601065753374, "acc_norm": 0.7088607594936709, "acc_norm_stderr": 0.029571601065753374 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6278026905829597, "acc_stderr": 0.03244305283008731, "acc_norm": 0.6278026905829597, "acc_norm_stderr": 0.03244305283008731 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.6030534351145038, "acc_stderr": 0.04291135671009224, "acc_norm": 0.6030534351145038, "acc_norm_stderr": 0.04291135671009224 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7355371900826446, "acc_stderr": 0.04026187527591205, "acc_norm": 0.7355371900826446, "acc_norm_stderr": 0.04026187527591205 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7407407407407407, "acc_stderr": 0.042365112580946336, "acc_norm": 0.7407407407407407, "acc_norm_stderr": 0.042365112580946336 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.6134969325153374, "acc_stderr": 0.038258255488486076, "acc_norm": 0.6134969325153374, "acc_norm_stderr": 0.038258255488486076 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.36607142857142855, "acc_stderr": 0.0457237235873743, "acc_norm": 0.36607142857142855, "acc_norm_stderr": 0.0457237235873743 }, "harness|hendrycksTest-management|5": { "acc": 0.6990291262135923, "acc_stderr": 0.045416094465039476, "acc_norm": 0.6990291262135923, "acc_norm_stderr": 0.045416094465039476 }, "harness|hendrycksTest-marketing|5": { "acc": 0.811965811965812, "acc_stderr": 0.025598193686652244, "acc_norm": 0.811965811965812, "acc_norm_stderr": 0.025598193686652244 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.53, "acc_stderr": 
0.05016135580465918, "acc_norm": 0.53, "acc_norm_stderr": 0.05016135580465918 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.7522349936143039, "acc_stderr": 0.015438083080568972, "acc_norm": 0.7522349936143039, "acc_norm_stderr": 0.015438083080568972 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.6242774566473989, "acc_stderr": 0.02607431485165708, "acc_norm": 0.6242774566473989, "acc_norm_stderr": 0.02607431485165708 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.3865921787709497, "acc_stderr": 0.016286674879101026, "acc_norm": 0.3865921787709497, "acc_norm_stderr": 0.016286674879101026 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.5947712418300654, "acc_stderr": 0.028110928492809075, "acc_norm": 0.5947712418300654, "acc_norm_stderr": 0.028110928492809075 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.6366559485530546, "acc_stderr": 0.02731684767419271, "acc_norm": 0.6366559485530546, "acc_norm_stderr": 0.02731684767419271 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.6018518518518519, "acc_stderr": 0.027237415094592474, "acc_norm": 0.6018518518518519, "acc_norm_stderr": 0.027237415094592474 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.40425531914893614, "acc_stderr": 0.029275532159704725, "acc_norm": 0.40425531914893614, "acc_norm_stderr": 0.029275532159704725 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4230769230769231, "acc_stderr": 0.01261820406658839, "acc_norm": 0.4230769230769231, "acc_norm_stderr": 0.01261820406658839 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.47794117647058826, "acc_stderr": 0.030343264224213535, "acc_norm": 0.47794117647058826, "acc_norm_stderr": 0.030343264224213535 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.5245098039215687, "acc_stderr": 0.020203517280261436, "acc_norm": 0.5245098039215687, "acc_norm_stderr": 0.020203517280261436 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6090909090909091, "acc_stderr": 
0.046737523336702384, "acc_norm": 0.6090909090909091, "acc_norm_stderr": 0.046737523336702384 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.5959183673469388, "acc_stderr": 0.03141470802586589, "acc_norm": 0.5959183673469388, "acc_norm_stderr": 0.03141470802586589 }, "harness|hendrycksTest-sociology|5": { "acc": 0.6865671641791045, "acc_stderr": 0.032801882053486456, "acc_norm": 0.6865671641791045, "acc_norm_stderr": 0.032801882053486456 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.82, "acc_stderr": 0.038612291966536934, "acc_norm": 0.82, "acc_norm_stderr": 0.038612291966536934 }, "harness|hendrycksTest-virology|5": { "acc": 0.463855421686747, "acc_stderr": 0.03882310850890593, "acc_norm": 0.463855421686747, "acc_norm_stderr": 0.03882310850890593 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.7543859649122807, "acc_stderr": 0.03301405946987251, "acc_norm": 0.7543859649122807, "acc_norm_stderr": 0.03301405946987251 }, "harness|truthfulqa:mc|0": { "mc1": 0.34761321909424725, "mc1_stderr": 0.016670769188897303, "mc2": 0.5022258550236057, "mc2_stderr": 0.015284175194421176 }, "harness|winogrande|5": { "acc": 0.7561168113654302, "acc_stderr": 0.012068923278908189 }, "harness|gsm8k|5": { "acc": 0.25018953752843065, "acc_stderr": 0.011930334350873352 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
roupenminassian/twitter-misinformation
--- task_categories: - text-classification --- # Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
Zhangzhe197/HupuJudge
--- license: apache-2.0 --- Hupu rating data, about 3.5M records in total
autoevaluate/autoeval-staging-eval-project-xsum-69daf1dd-12935741
--- type: predictions tags: - autotrain - evaluation datasets: - xsum eval_info: task: summarization model: google/pegasus-large metrics: ['bleu'] dataset_name: xsum dataset_config: default dataset_split: test col_mapping: text: document target: summary --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/pegasus-large * Dataset: xsum * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model.
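The evaluation above reports BLEU for a summarization model. As a reminder of what that metric measures, here is a minimal pure-Python sketch of clipped (modified) n-gram precision, the core ingredient of BLEU; this is not the implementation the evaluator actually uses, and the function name is illustrative:

```python
from collections import Counter

def clipped_ngram_precision(candidate, reference, n=1):
    """Clipped n-gram precision between two token lists (the core of BLEU).

    Candidate n-gram counts are clipped by their counts in the reference,
    so repeating a correct word cannot inflate the score.
    """
    cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
    ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
    overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
    total = sum(cand.values())
    return overlap / total if total else 0.0
```

Full BLEU combines these precisions for n = 1 through 4 with a brevity penalty; for comparable reported scores, use a standard implementation such as sacreBLEU.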
one-sec-cv12/chunk_225
--- dataset_info: features: - name: audio dtype: audio: sampling_rate: 16000 splits: - name: train num_bytes: 21203076240.625 num_examples: 220755 download_size: 20183788050 dataset_size: 21203076240.625 --- # Dataset Card for "chunk_225" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
9wimu9/sinhala_30m
--- dataset_info: features: - name: id dtype: int64 - name: text dtype: string splits: - name: train num_bytes: 7579499129.883565 num_examples: 30600246 - name: test num_bytes: 421083367.55821764 num_examples: 1700014 - name: valid num_bytes: 421083367.55821764 num_examples: 1700014 download_size: 4095088923 dataset_size: 8421665865.000001 --- # Dataset Card for "sinhala_30m" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
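The declared split sizes above imply roughly a 90/5/5 train/test/valid partition. A small sketch verifying this from the counts stated in the card (assuming those declared counts are exact):

```python
# Example counts as declared in the card's dataset_info (assumed exact).
splits = {"train": 30600246, "test": 1700014, "valid": 1700014}

total = sum(splits.values())
fractions = {name: count / total for name, count in splits.items()}

for name, frac in fractions.items():
    # prints roughly 90% for train and 5% each for test and valid
    print(f"{name}: {frac:.1%}")
```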
CyberHarem/royal_fortune_azurlane
--- license: mit task_categories: - text-to-image tags: - art - not-for-all-audiences size_categories: - n<1K --- # Dataset of royal_fortune/ロイヤル・フォーチュン/皇家财富号 (Azur Lane) This is the dataset of royal_fortune/ロイヤル・フォーチュン/皇家财富号 (Azur Lane), containing 72 images and their tags. The core tags of this character are `red_eyes, breasts, long_hair, large_breasts, bangs, black_hair, brown_hair, hat, horns`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 72 | 135.57 MiB | [Download](https://huggingface.co/datasets/CyberHarem/royal_fortune_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 72 | 62.66 MiB | [Download](https://huggingface.co/datasets/CyberHarem/royal_fortune_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 189 | 141.37 MiB | [Download](https://huggingface.co/datasets/CyberHarem/royal_fortune_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 72 | 111.22 MiB | [Download](https://huggingface.co/datasets/CyberHarem/royal_fortune_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. 
| | stage3-p480-1200 | 189 | 213.47 MiB | [Download](https://huggingface.co/datasets/CyberHarem/royal_fortune_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide a raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/royal_fortune_azurlane',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 9 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, cleavage, looking_at_viewer, red_bikini, solo, blush, navel, smile, open_mouth, simple_background, white_background, bare_shoulders, skindentation, tentacles, thighs, huge_breasts | | 1 | 22 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, solo, smile, tentacles, looking_at_viewer, pirate_hat, torn_pantyhose, bare_shoulders, black_jacket, open_mouth, red_skirt, shirt, sideboob, red_ascot, boots | | 2 | 16 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, cleavage, demon_horns, solo, looking_at_viewer, demon_girl, hair_bow, black_thighhighs, demon_tail, demon_wings, navel, apron, miniskirt, open_mouth, black_skirt, blush, frills, pleated_skirt, ribbon, thighs, white_bow, stomach, :d, fang, halloween, tentacles, white_bikini | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | cleavage | looking_at_viewer | red_bikini | solo | blush | navel | smile | open_mouth | simple_background | white_background | bare_shoulders | skindentation | tentacles | thighs 
| huge_breasts | pirate_hat | torn_pantyhose | black_jacket | red_skirt | shirt | sideboob | red_ascot | boots | demon_horns | demon_girl | hair_bow | black_thighhighs | demon_tail | demon_wings | apron | miniskirt | black_skirt | frills | pleated_skirt | ribbon | white_bow | stomach | :d | fang | halloween | white_bikini | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------|:--------------------|:-------------|:-------|:--------|:--------|:--------|:-------------|:--------------------|:-------------------|:-----------------|:----------------|:------------|:---------|:---------------|:-------------|:-----------------|:---------------|:------------|:--------|:-----------|:------------|:--------|:--------------|:-------------|:-----------|:-------------------|:-------------|:--------------|:--------|:------------|:--------------|:---------|:----------------|:---------|:------------|:----------|:-----|:-------|:------------|:---------------| | 0 | 9 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 22 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | | X | | X | | | X | X | | | X | | X | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | 2 | 16 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | | X | X | X | | X | | | | | X | X | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | 
X | X | X | X | X | X |
HSL-MJ/llm60
--- license: apache-2.0 ---
innermost47/alpaca-fr
--- license: cc-by-nc-4.0 task_categories: - text-generation language: - fr ---
zolak/twitter_dataset_1713027364
--- dataset_info: features: - name: id dtype: string - name: tweet_content dtype: string - name: user_name dtype: string - name: user_id dtype: string - name: created_at dtype: string - name: url dtype: string - name: favourite_count dtype: float64 - name: scraped_at dtype: string - name: image_urls dtype: string splits: - name: train num_bytes: 238771011 num_examples: 631480 download_size: 123159913 dataset_size: 238771011 configs: - config_name: default data_files: - split: train path: data/train-* ---
kpriyanshu256/MultiTabQA-multitable_pretraining-Salesforce-codet5-base_train-markdown-32000
--- dataset_info: features: - name: input_ids sequence: sequence: int32 - name: attention_mask sequence: sequence: int8 - name: labels sequence: sequence: int64 splits: - name: train num_bytes: 13336000 num_examples: 1000 download_size: 1063930 dataset_size: 13336000 configs: - config_name: default data_files: - split: train path: data/train-* ---
FredZhang7/krea-ai-prompts
--- license: mit --- See [DistilGPT2 Stable Diffusion](https://huggingface.co/FredZhang7/distilgpt2-stable-diffusion)
Ineract/policies-named-insured
--- dataset_info: features: - name: id dtype: string - name: context dtype: string - name: question dtype: string - name: answers sequence: - name: text dtype: string - name: answer_start dtype: int32 config_name: plain_text splits: - name: train num_bytes: 3245009 num_examples: 7632 - name: test num_bytes: 359230 num_examples: 849 download_size: 5007313 dataset_size: 3604239 ---
Augusto777/dmae-da-U4
--- dataset_info: features: - name: image dtype: image - name: label dtype: class_label: names: '0': avanzada '1': leve '2': moderada '3': no dmae splits: - name: train num_bytes: 7599070.0 num_examples: 974 - name: test num_bytes: 17532936.0 num_examples: 51 - name: validation num_bytes: 18989054.0 num_examples: 51 download_size: 43682919 dataset_size: 44121060.0 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* - split: validation path: data/validation-* ---
kpriyanshu256/MultiTabQA-multitable_pretraining-Salesforce-codet5-base_train-latex-101000
--- dataset_info: features: - name: input_ids sequence: sequence: int32 - name: attention_mask sequence: sequence: int8 - name: labels sequence: sequence: int64 splits: - name: train num_bytes: 13336000 num_examples: 1000 download_size: 1014209 dataset_size: 13336000 configs: - config_name: default data_files: - split: train path: data/train-* ---
liuyanchen1015/MULTI_VALUE_qqp_referential_thing
--- dataset_info: features: - name: question1 dtype: string - name: question2 dtype: string - name: label dtype: int64 - name: idx dtype: int64 - name: value_score dtype: int64 splits: - name: dev num_bytes: 1047408 num_examples: 5445 - name: test num_bytes: 10522865 num_examples: 54642 - name: train num_bytes: 9461135 num_examples: 48599 download_size: 12824499 dataset_size: 21031408 --- # Dataset Card for "MULTI_VALUE_qqp_referential_thing" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
bcijo/Gold-alpaca-legal
--- configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: 'Unnamed: 0' dtype: int64 - name: Instruction dtype: string - name: Input dtype: string - name: Output dtype: string - name: Text dtype: string splits: - name: train num_bytes: 1466796 num_examples: 127 download_size: 664187 dataset_size: 1466796 --- # Dataset Card for "Gold-alpaca-legal" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
joey234/mmlu-logical_fallacies
--- dataset_info: features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D - name: negate_openai_prompt struct: - name: content dtype: string - name: role dtype: string - name: neg_question dtype: string - name: fewshot_context dtype: string - name: fewshot_context_neg dtype: string splits: - name: dev num_bytes: 4849 num_examples: 5 - name: test num_bytes: 667378 num_examples: 163 download_size: 83799 dataset_size: 672227 configs: - config_name: default data_files: - split: dev path: data/dev-* - split: test path: data/test-* --- # Dataset Card for "mmlu-logical_fallacies" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Lancelot457357/Napoleon_Bonaparte_voice_data
--- license: cc ---
AiHevenpen/proj
--- license: mit ---
CyberHarem/ferry_granbluefantasy
--- license: mit task_categories: - text-to-image tags: - art - not-for-all-audiences size_categories: - n<1K --- # Dataset of ferry/フェリ (Granblue Fantasy) This is the dataset of ferry/フェリ (Granblue Fantasy), containing 500 images and their tags. The core tags of this character are `animal_ears, blue_hair, long_hair, breasts, wavy_hair, earrings, yellow_eyes, rabbit_ears, bangs, medium_breasts, single_earring, hair_between_eyes, brown_eyes`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 500 | 769.60 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ferry_granbluefantasy/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 500 | 429.70 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ferry_granbluefantasy/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 1242 | 924.59 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ferry_granbluefantasy/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 500 | 675.46 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ferry_granbluefantasy/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. 
| | stage3-p480-1200 | 1242 | 1.29 GiB | [Download](https://huggingface.co/datasets/CyberHarem/ferry_granbluefantasy/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide a raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/ferry_granbluefantasy',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 8 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, blush, erune, looking_at_viewer, sideboob, simple_background, solo, armpits, black_gloves, elbow_gloves, arms_up, bare_shoulders, jewelry, upper_body, white_background, x_hair_ornament, black_dress, covered_navel, backless_outfit, closed_mouth | | 1 | 8 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, black_dress, black_gloves, erune, looking_at_viewer, sideboob, solo, x_hair_ornament, elbow_gloves, holding_whip, blush, cape, jewelry, armpits, bare_shoulders, black_thighhighs, closed_mouth, simple_background, smile, very_long_hair, arm_up, sleeveless_dress, weapon, white_background | | 2 | 6 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, backless_outfit, bare_shoulders, black_gloves, blue_skirt, dress, erune, looking_at_viewer, open_mouth, sideboob, solo, frills, holding_whip, jewelry, sleeveless, brown_thighhighs, ghost, armpits, brown_gloves | | 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | 
![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, bare_shoulders, black_gloves, boots, brown_thighhighs, erune, holding, looking_at_viewer, sideboob, solo, black_footwear, blue_skirt, dress, open_mouth, simple_background, whip, white_background, backless_outfit, full_body, jewelry, black_thighhighs | | 4 | 8 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, bare_back, bare_shoulders, erune, looking_at_viewer, looking_back, sideboob, solo, black_gloves, from_behind, backless_dress, blue_skirt, jewelry, thighhighs, weapon, arm_up, blush, ghost, holding_whip, open_mouth, small_breasts | | 5 | 6 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, ass, black_gloves, blue_skirt, erune, from_behind, looking_at_viewer, looking_back, sideboob, solo, bare_back, bare_shoulders, black_thighhighs, blush, jewelry, simple_background, backless_dress, black_panties, small_breasts, white_background, armpits, holding_whip, thighs | | 6 | 6 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, bare_shoulders, black_gloves, detached_sleeves, erune, looking_at_viewer, red_dress, solo, underboob, official_alternate_costume, simple_background, white_background, hair_bow, small_breasts, smile, blush, fur-trimmed_sleeves, hand_up, jewelry, long_sleeves, open_mouth, very_long_hair, wide_sleeves | | 7 | 6 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | 1girl, erune, holding, open_mouth, solo, bare_shoulders, black_skirt, blush, ghost, 
hair_flower, high-waist_skirt, looking_at_viewer, sleeveless_shirt, white_shirt, small_breasts, :d, ;d, bare_arms, book, full_body, jewelry, one_eye_closed, petals, simple_background, standing, white_background | | 8 | 28 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | 1girl, erune, hair_flower, solo, looking_at_viewer, official_alternate_costume, jewelry, navel, blush, ponytail, smile, blue_skirt, cleavage, bare_shoulders, white_bikini, bikini_skirt, hair_ribbon, open_mouth, blue_flower, white_background, simple_background | | 9 | 5 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | 1girl, blue_sky, blush, cloud, day, erune, jewelry, looking_at_viewer, solo, armpits, blue_bikini, navel, outdoors, small_breasts, smile, very_long_hair, cleavage, ocean, open_mouth, water, arms_behind_head, arms_up, bare_shoulders, one_eye_closed, stomach, thighs, wading | | 10 | 11 | ![](samples/10/clu10-sample0.png) | ![](samples/10/clu10-sample1.png) | ![](samples/10/clu10-sample2.png) | ![](samples/10/clu10-sample3.png) | ![](samples/10/clu10-sample4.png) | 1boy, 1girl, blush, erune, hetero, penis, solo_focus, nipples, open_mouth, sex, vaginal, nude, small_breasts, sweat, jewelry, cum_in_pussy, navel, spread_legs, thighhighs, bar_censor, black_gloves, uncensored | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | erune | looking_at_viewer | sideboob | simple_background | solo | armpits | black_gloves | elbow_gloves | arms_up | bare_shoulders | jewelry | upper_body | white_background | x_hair_ornament | black_dress | covered_navel | backless_outfit | closed_mouth | holding_whip | cape | black_thighhighs | smile | very_long_hair | arm_up | sleeveless_dress | weapon | blue_skirt | dress | open_mouth | 
frills | sleeveless | brown_thighhighs | ghost | brown_gloves | boots | holding | black_footwear | whip | full_body | bare_back | looking_back | from_behind | backless_dress | thighhighs | small_breasts | ass | black_panties | thighs | detached_sleeves | red_dress | underboob | official_alternate_costume | hair_bow | fur-trimmed_sleeves | hand_up | long_sleeves | wide_sleeves | black_skirt | hair_flower | high-waist_skirt | sleeveless_shirt | white_shirt | :d | ;d | bare_arms | book | one_eye_closed | petals | standing | navel | ponytail | cleavage | white_bikini | bikini_skirt | hair_ribbon | blue_flower | blue_sky | cloud | day | blue_bikini | outdoors | ocean | water | arms_behind_head | stomach | wading | 1boy | hetero | penis | solo_focus | nipples | sex | vaginal | nude | sweat | cum_in_pussy | spread_legs | bar_censor | uncensored | |----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:--------|:--------|:--------------------|:-----------|:--------------------|:-------|:----------|:---------------|:---------------|:----------|:-----------------|:----------|:-------------|:-------------------|:------------------|:--------------|:----------------|:------------------|:---------------|:---------------|:-------|:-------------------|:--------|:-----------------|:---------|:-------------------|:---------|:-------------|:--------|:-------------|:---------|:-------------|:-------------------|:--------|:---------------|:--------|:----------|:-----------------|:-------|:------------|:------------|:---------------|:--------------|:-----------------|:-------------|:----------------|:------|:----------------|:---------|:-------------------|:------------|:------------|:-----------------------------|:-----------|:----------------------|:----------|:---------------|:---------------|:--------------|:--------------|:--------
-----------|:-------------------|:--------------|:-----|:-----|:------------|:-------|:-----------------|:---------|:-----------|:--------|:-----------|:-----------|:---------------|:---------------|:--------------|:--------------|:-----------|:--------|:------|:--------------|:-----------|:--------|:--------|:-------------------|:----------|:---------|:-------|:---------|:--------|:-------------|:----------|:------|:----------|:-------|:--------|:---------------|:--------------|:-------------|:-------------| | 0 | 8 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 8 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | X | X | X | X | | X | X | | X | X | X | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 6 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | | X | X | X | | X | X | X | | | X | X | | | | | | X | | X | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | | X | X | X | X | X | | X | 
| | X | X | | X | | | | X | | | | X | | | | | | X | X | X | | | X | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 8 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | X | X | X | X | | X | | X | | | X | X | | | | | | | | X | | | | | X | | X | X | | X | | | | X | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 5 | 6 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | X | X | X | X | X | X | X | X | | | X | X | | X | | | | | | X | | X | | | | | | X | | | | | | | | | | | | | X | X | X | X | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 6 | 6 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | X | X | X | | X | X | | X | | | X | X | | X | | | | | | | | | X | X | | | | | | X | | | | | | | | | | | | | | | | X | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 7 | 6 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | X | X | X | X | | X | X | | | | | X | X | | X | | | | | | | | | | | | | | | | X | | | | X | | | X | | | X | | | | | | X | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 8 | 28 | ![](samples/8/clu8-sample0.png) | 
![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | X | X | X | X | | X | X | | | | | X | X | | X | | | | | | | | | X | | | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | X | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | 9 | 5 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | X | X | X | X | | | X | X | | | X | X | X | | | | | | | | | | | X | X | | | | | | X | | | | | | | | | | | | | | | | X | | | X | | | | | | | | | | | | | | | | | | | X | | | X | | X | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | 10 | 11 | ![](samples/10/clu10-sample0.png) | ![](samples/10/clu10-sample1.png) | ![](samples/10/clu10-sample2.png) | ![](samples/10/clu10-sample3.png) | ![](samples/10/clu10-sample4.png) | X | X | X | | | | | | X | | | | X | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X |
maidalun1020/CrosslingualRetrievalOthersZh2En
--- license: apache-2.0 configs: - config_name: default data_files: - split: queries path: data/queries-* - split: corpus path: data/corpus-* dataset_info: features: - name: id dtype: string - name: text dtype: string splits: - name: queries num_bytes: 4629431 num_examples: 22963 - name: corpus num_bytes: 4699691 num_examples: 3918 download_size: 6356941 dataset_size: 9329122 ---
thebogko/bulgarian-spelling-mistakes
--- license: apache-2.0 task_categories: - text2text-generation language: - bg pretty_name: Bulgarian Spelling Mistakes size_categories: - 10K<n<100K --- # Dataset of Bulgarian Spelling Mistakes ## Table of Contents - [Dataset of Bulgarian Spelling Mistakes](#dataset-of-bulgarian-spelling-mistakes) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Introducing Spelling Errors](#introducing-spelling-errors) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) <!-- - [Dataset Curators](#dataset-curators)--> - [Licensing Information](#licensing-information) <!-- - [Citation Information](#citation-information) - [Contributions](#contributions)--> ## Dataset Description <!--- **Paper:** - **Point of Contact:** --> ### Dataset Summary This is a dataset of sentences in Bulgarian with spelling mistakes, created by automatically inducing errors in correct sentences. ### Supported Tasks - `text2text-generation`: The dataset can be used to train a model for spelling error correction, which consists in correcting spelling errors in a source sentence, resulting in a correct version. ### Languages - `bg`: Only Bulgarian is supported by this dataset.
## Dataset Structure ### Data Instances Each instance contains an `error_type`, which can be one of seven pre-defined classes. This `error_type` describes the error found in the sequence under `erroeneous`, which has been corrected in the sequence under `correct`.

```
{
  'error_type': 'article_misuse',
  'erroeneous': 'Възстанието влияе на всички ни!',
  'correct': 'Въстанието влияе на всички ни!'
}
```

### Data Fields - `error_type`: a string sequence that can be one of: - **vowel_change** - **double_consonant** - **end_of_lemma_consonant** - **double_t_or_n** - **loss_of_t_or_d_sound** - **random_char** - **semantic** - `erroeneous`: a string sequence of the erroneous sentence containing up to three errors in the error-correct pair - `correct`: a string sequence of the correct sentence in the error-correct pair ### Data Splits No pre-defined split has been applied to the dataset, so the developer has the freedom to choose one that suits the task. ## Dataset Creation ### Curation Rationale Main motivations for the creation of this dataset: - resources for error correction NLP systems in Bulgarian are scarce, so this dataset aims to encourage the development and evaluation of more such systems, - existing solutions within Bulgarian NLP are traditional machine learning methods, so a dataset like this aims to encourage the development of state-of-the-art models (e.g. deep learning approaches) ### Source Data The source data for this dataset has been collected from Bulgarian Wikipedia articles. ### Initial Data Collection and Normalization The data collection process was as follows: 1) Data collection: Articles from Bulgarian Wikipedia were collected using Wikipedia's API. 2) The source texts underwent POS tagging and sentence segmentation using the tool from the work of [(Berbatova M., Ivanov F., 2023)](https://doi.org/10.60063/gsu.fmi.110.37-50).
3) Only sentences that (1) have three or more words, and (2) contain a token tagged as a verb were kept, as many of the sentences were simply article titles, links and other textual data we weren't interested in. ### Introducing Spelling Errors The dataset has been created by introducing spelling errors into correct sentences. Errors of the pre-defined types were induced using Python scripts, taking into account each error's nature. The algorithm is outlined as follows: - Take a source sentence, which would be kept as the reference. - Induce an error of the pre-defined types, if that is possible (some sentences do not contain the necessary pre-requisites for certain error types, e.g. a relative pronoun is necessary to introduce an error of that type) - Pair up the correct and erroneous versions These errors have only been applied to lemmas: - larger than three characters, because shorter words tend to be functional words, and - not containing capitalised letters, on the assumption that a capitalised word might be a named entity The resulting erroneous sentences were created with seven different types of changes, reflected in the error_type column. In the examples below the first sequence is the **CORRECT** form, and the resulting sequence is **ERRONEOUS**. 1) **vowel_change**: a vowel that is not under stress is changed to its twin-sounding vowel (e.g.
'кръг**ъ**л' -> 'кръг**а**л') 2) **double_consonant**: when two neighbouring consonants have differing sound strengths, the first one is changed to match the second ('постано**в**ка' -> 'постано**ф**ка') 3) **end_of_lemma_consonant**: changing a strong-sounding consonant to its weak-sounding twin if at the end of a lemma ('маси**в**' -> 'маси**ф**') 4) **double_t_or_n**: removing one of the 'н' or 'т' letters when two of the same kind are found neighbouring, often in suffixes ('пролет**т**а' -> 'пролета', 'един**н**о' -> 'едино') 5) **loss_of_t_or_d_sound**: removing either 'т' or 'д' if found in a lemma where the sound of the removed letter is lost ('вес**т**ник' -> 'весник', 'звез**д**на' -> 'звезна') 6) **random_char**: replacing one random character in a lemma with another random character ('мо**м**иче' -> 'мо**г**иче') 7) **semantic**: replacing, removing, adding a character or swapping two characters of a lemma such that the resulting word is a valid word syntactically, but the sentence is no longer semantically coherent ('Момчето, което обичам.' -> 'Момчето, ко**т**ето обичам.') ### Personal and Sensitive Information The source of this dataset is an open-source data collection (Wikipedia), and it carries the same risk of containing personal and/or sensitive information as the source does. ## Considerations for Using the Data ### Social Impact of Dataset A dataset like this can be beneficial for language learners and developers in the error correction community. ### Discussion of Biases The `error_type` classes are not distributed equally, as some errors are more common than others. It is important that the developer utilising this dataset is aware of this, so as not to create error correction systems/evaluations that are biased. ### Other Known Limitations There are many spelling errors not covered by this dataset, as it's the first of its kind. Hopefully, it encourages people to create more datasets like this and models that utilise them.
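The error-induction procedure described above can be sketched in Python. This is an illustrative reconstruction, not the authors' actual scripts: the `VOWEL_TWINS` table is an assumption (only the ъ -> а pair is attested by this card's example), the check for stressed vowels is omitted, and only the `vowel_change` type is shown:

```python
import random

# Assumed twin-sounding vowel pairs; only ъ -> а is attested in this card.
VOWEL_TWINS = {"ъ": "а", "а": "ъ", "о": "у", "у": "о", "е": "и", "и": "е"}

def induce_vowel_change(word, rng):
    """Swap one eligible vowel for its twin; return None if no error fits."""
    # Errors are only applied to lemmas longer than three characters
    # that contain no capitalised letters (possible named entities).
    if len(word) <= 3 or any(ch.isupper() for ch in word):
        return None
    positions = [i for i, ch in enumerate(word) if ch in VOWEL_TWINS]
    if not positions:
        return None
    i = rng.choice(positions)
    return word[:i] + VOWEL_TWINS[word[i]] + word[i + 1:]

rng = random.Random(42)
print(induce_vowel_change("кръгъл", rng))  # an erroneous variant, e.g. 'кръгал'
print(induce_vowel_change("София", rng))   # None: capitalised, likely a named entity
```

Sketches for the other six error types would follow the same shape, differing only in the character patterns they look for and the substitution they apply.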
## Additional Information <!-- ### Dataset Curators List the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here. --> ### Licensing Information The license of the dataset is [apache2.0](https://www.apache.org/licenses/LICENSE-2.0). <!-- ### Citation Information Provide the [BibTex](http://www.bibtex.org/)-formatted reference for the dataset. For example: ``` @article{article_id, author = {Author List}, title = {Dataset Paper Title}, journal = {Publication Venue}, year = {2525} } ``` If the dataset has a [DOI](https://www.doi.org/), please provide it here. ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. -->
lecslab/guarani
--- dataset_info: features: - name: glottocode dtype: string - name: metalang_glottocode dtype: string - name: is_segmented dtype: string - name: source dtype: string - name: id dtype: string - name: transcription dtype: string - name: glosses dtype: string - name: translation dtype: string splits: - name: train num_bytes: 395887 num_examples: 1606 download_size: 116792 dataset_size: 395887 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "guarani" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
microsoft/orca-math-word-problems-200k
--- dataset_info: features: - name: question dtype: string - name: answer dtype: string splits: - name: train num_bytes: 225322861 num_examples: 200035 download_size: 84248748 dataset_size: 225322861 configs: - config_name: default data_files: - split: train path: data/train-* license: mit task_categories: - question-answering language: - en tags: - math size_categories: - 100K<n<1M --- # Dataset Card <!-- Provide a quick summary of the dataset. --> This dataset contains ~200K grade school math word problems. All the answers in this dataset were generated using Azure GPT-4 Turbo. Please refer to [Orca-Math: Unlocking the potential of SLMs in Grade School Math](https://arxiv.org/pdf/2402.14830.pdf) for details about the dataset construction. ### Dataset Description - **Curated by:** Microsoft - **Language(s) (NLP):** English - **License:** MIT ### Dataset Sources <!-- Provide the basic links for the dataset. --> - **Repository:** [microsoft/orca-math-word-problems-200k](https://huggingface.co/datasets/microsoft/orca-math-word-problems-200k) - **Paper:** [Orca-Math: Unlocking the potential of SLMs in Grade School Math](https://arxiv.org/pdf/2402.14830.pdf) ### Direct Use <!-- This section describes suitable use cases for the dataset. --> This dataset has been designed to enhance the mathematical abilities of language models. It aims to provide a robust foundation for language models to excel in mathematical problem-solving. ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> This dataset is not intended for use in educational systems or organizations. ## Dataset Structure ### Data Instances A typical data entry in the dataset consists of a question and its corresponding answer. Below is an example from the dataset: ```python {'question': 'In a highly contested election having multiple candidates, Mr.
Jackson, one of the losing candidates, received 3,485,782 votes, which accounted for precisely 38.7 percent of all votes. To have achieved a victory, he would have needed to secure at least 51 percent of all votes. Approximately, what percent of the remaining unsecured votes would Mr. Jackson have needed to accumulate to reach this victory threshold?', 'answer': "First, let's find out the total number of votes cast in the election. Since Mr. Jackson received 38.7% of all votes, and that amounted to 3,485,782 votes, we can set up the following equation to find the total number of votes (T):\n\n0.387 * T = 3,485,782\n\nNow, solve for T:\n\nT = 3,485,782 / 0.387\nT ≈ 9,000,467 votes (total number of votes cast)\n\nTo win, Mr. Jackson would have needed 51% of the total votes. Let's calculate that amount:\n\n0.51 * T = 0.51 * 9,000,467\n0.51 * T ≈ 4,590,238 votes needed to win\n\nNow, let's find out how many more votes Mr. Jackson needed to reach this winning threshold:\n\nVotes needed to win - Votes Mr. Jackson received = Additional votes needed\n4,590,238 - 3,485,782 = 1,104,456 additional votes needed\n\nNow, let's find out what percentage of the remaining unsecured votes this number represents. The remaining unsecured votes are the votes that were not for Mr. Jackson, which is 100% - 38.7% = 61.3% of the total votes.\n\n61.3% of the total votes is the remaining unsecured votes:\n\n0.613 * T = 0.613 * 9,000,467\n0.613 * T ≈ 5,514,686 votes were unsecured\n\nNow, we'll calculate the percentage of these unsecured votes that the additional votes needed represent:\n\n(Additional votes needed / Unsecured votes) * 100 = Percentage of unsecured votes needed\n(1,104,456 / 5,514,686) * 100 ≈ 20.03%\n\nSo, Mr. Jackson would have needed approximately 20.03% of the remaining unsecured votes to reach the victory threshold of 51%."} ``` ### Data Fields The dataset comprises the following fields: - `question`: a string containing the question to be answered. 
- `answer`: a string containing the answer to the corresponding question. ### Data Splits The dataset consists of a single training split. The number of rows in each split is as follows: - `train`: 200,035 rows The `DatasetDict` structure for the dataset is as follows: ```python DatasetDict({ 'train': Dataset({ features: ['question', 'answer'], num_rows: 200035 }) }) ``` Each split in the `DatasetDict` contains a `Dataset` object with the specified features and number of rows. ## Dataset Creation Please refer to [Orca-Math: Unlocking the potential of SLMs in Grade School Math](https://arxiv.org/pdf/2402.14830.pdf) for details about the dataset construction. ### Source Data - [Lila](https://huggingface.co/datasets/allenai/lila) - [DMath](https://arxiv.org/ftp/arxiv/papers/2106/2106.15772.pdf) #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> Please refer to [Orca-Math: Unlocking the potential of SLMs in Grade School Math](https://arxiv.org/pdf/2402.14830.pdf) for details about the dataset construction. #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> Microsoft #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> We expanded a seed set of questions using Azure GPT-4 Turbo. The answers to those questions were generated using Azure GPT-4 Turbo.
#### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> None ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> This dataset is in English and contains only math word problems. ## Citation If you find this work useful in your method, you can cite the paper as below: ``` @misc{mitra2024orcamath, title={Orca-Math: Unlocking the potential of SLMs in Grade School Math}, author={Arindam Mitra and Hamed Khanpour and Corby Rosset and Ahmed Awadallah}, year={2024}, eprint={2402.14830}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## Dataset Card Contact [Arindam Mitra](armitra@microsoft.com)
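As a quick aside, the arithmetic in the sample `question`/`answer` pair above is easy to sanity-check in a few lines of Python; the needed share of unsecured votes turns out to be independent of the total vote count, and the exact value (≈20.07%) differs slightly from the 20.03% quoted in the generated answer because that answer's intermediate totals are a little off:

```python
# Sanity-check the election word problem from the sample data instance.
votes_received = 3_485_782
share = 0.387          # fraction of all votes Mr. Jackson received
win_share = 0.51       # fraction of all votes needed to win

total_votes = votes_received / share
additional = win_share * total_votes - votes_received
unsecured = (1 - share) * total_votes

pct_needed = additional / unsecured * 100
# Algebraically this reduces to (0.51 - 0.387) / (1 - 0.387) * 100:
assert abs(pct_needed - (win_share - share) / (1 - share) * 100) < 1e-6
print(round(pct_needed, 2))  # 20.07
```

Small slips like this are worth keeping in mind when using the generated answers as gold references.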
umesh16071973/Floorplan_Dataset_22022024
--- license: mit ---
rohk/hipparcos-catalog
--- license: cc0-1.0 ---
HuggingFaceM4/HatefulMemes_support_query_sets
elprofecoss/sentiment-banking
--- dataset_info: features: - name: text dtype: string - name: inputs struct: - name: text dtype: string - name: prediction list: - name: label dtype: string - name: score dtype: float64 - name: prediction_agent dtype: string - name: annotation dtype: 'null' - name: annotation_agent dtype: 'null' - name: multi_label dtype: bool - name: explanation dtype: 'null' - name: id dtype: 'null' - name: metadata struct: - name: category dtype: int64 - name: status dtype: string - name: event_timestamp dtype: 'null' - name: metrics dtype: 'null' splits: - name: train num_bytes: 1205760 num_examples: 5001 download_size: 449589 dataset_size: 1205760 configs: - config_name: default data_files: - split: train path: data/train-* ---
result-kand2-sdxl-wuerst-karlo/2b270b48
--- dataset_info: features: - name: result dtype: string - name: id dtype: int64 splits: - name: train num_bytes: 162 num_examples: 10 download_size: 1306 dataset_size: 162 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "2b270b48" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Nexdata/700_Thousand_Sets_Image_Caption_Data_Of_General_Scenes
--- license: cc-by-nc-nd-4.0 --- ## Description 1,000,000 sets of images and descriptions. The picture types include landscapes, animals, flowers and trees, people, cars, sports, industry, and buildings, as well as an aesthetic subset. Each image has no fewer than two descriptions, each one sentence long; a small number of images have only one description. The description languages are English and Chinese. For more details, please refer to the link: https://www.nexdata.ai/dataset/1331?source=Huggingface ## Data size 1,000,000 sets of images and descriptions ## Image type covers landscapes, animals, flowers and trees, people, cars, sports, industry, and architecture, as well as an aesthetic subset ## Data format image format is .jpg, text format is .txt ## Description language Chinese, English ## Text length in principle, a single sentence should be 5-20 characters, and each picture should cover no less than two types of descriptions, each with one sentence; a few images have only one description ## Main description content the main scene or some salient features in the image ## Accuracy rate the proportion of correctly labeled images is not less than 95% # Licensing Information Commercial License
enoahjr/twitter_dataset_1713196848
--- dataset_info: features: - name: id dtype: string - name: tweet_content dtype: string - name: user_name dtype: string - name: user_id dtype: string - name: created_at dtype: string - name: url dtype: string - name: favourite_count dtype: int64 - name: scraped_at dtype: string - name: image_urls dtype: string splits: - name: train num_bytes: 1526431 num_examples: 4698 download_size: 796420 dataset_size: 1526431 configs: - config_name: default data_files: - split: train path: data/train-* ---
BangumiBase/helck
--- license: mit tags: - art size_categories: - 1K<n<10K --- # Bangumi Image Base of Helck This is the image base of the bangumi Helck; we detected 45 characters and 2595 images in total. The full dataset is [here](all.zip). **Please note that these image bases are not guaranteed to be 100% cleaned; they may actually be noisy.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability). Here is the characters' preview: | # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 | |:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------| | 0 | 570 | [Download](0/dataset.zip) | ![preview 1](0/preview_1.png) | ![preview 2](0/preview_2.png) | ![preview 3](0/preview_3.png) | ![preview 4](0/preview_4.png) | ![preview 5](0/preview_5.png) | ![preview 6](0/preview_6.png) | ![preview 7](0/preview_7.png) | ![preview 8](0/preview_8.png) | | 1 | 50 | [Download](1/dataset.zip) | ![preview 1](1/preview_1.png) | ![preview 2](1/preview_2.png) | ![preview 3](1/preview_3.png) | ![preview 4](1/preview_4.png) | ![preview 5](1/preview_5.png) | ![preview 6](1/preview_6.png) | ![preview 7](1/preview_7.png) | ![preview 8](1/preview_8.png) | | 2 | 39 | [Download](2/dataset.zip) | ![preview 1](2/preview_1.png) | ![preview 2](2/preview_2.png) | ![preview 3](2/preview_3.png) | ![preview 4](2/preview_4.png) | ![preview 5](2/preview_5.png) | ![preview 6](2/preview_6.png) | ![preview 7](2/preview_7.png) | ![preview 8](2/preview_8.png) | | 3 | 38 | [Download](3/dataset.zip) | ![preview 1](3/preview_1.png) | ![preview 2](3/preview_2.png) | ![preview 
3](3/preview_3.png) | ![preview 4](3/preview_4.png) | ![preview 5](3/preview_5.png) | ![preview 6](3/preview_6.png) | ![preview 7](3/preview_7.png) | ![preview 8](3/preview_8.png) | | 4 | 17 | [Download](4/dataset.zip) | ![preview 1](4/preview_1.png) | ![preview 2](4/preview_2.png) | ![preview 3](4/preview_3.png) | ![preview 4](4/preview_4.png) | ![preview 5](4/preview_5.png) | ![preview 6](4/preview_6.png) | ![preview 7](4/preview_7.png) | ![preview 8](4/preview_8.png) | | 5 | 96 | [Download](5/dataset.zip) | ![preview 1](5/preview_1.png) | ![preview 2](5/preview_2.png) | ![preview 3](5/preview_3.png) | ![preview 4](5/preview_4.png) | ![preview 5](5/preview_5.png) | ![preview 6](5/preview_6.png) | ![preview 7](5/preview_7.png) | ![preview 8](5/preview_8.png) | | 6 | 25 | [Download](6/dataset.zip) | ![preview 1](6/preview_1.png) | ![preview 2](6/preview_2.png) | ![preview 3](6/preview_3.png) | ![preview 4](6/preview_4.png) | ![preview 5](6/preview_5.png) | ![preview 6](6/preview_6.png) | ![preview 7](6/preview_7.png) | ![preview 8](6/preview_8.png) | | 7 | 9 | [Download](7/dataset.zip) | ![preview 1](7/preview_1.png) | ![preview 2](7/preview_2.png) | ![preview 3](7/preview_3.png) | ![preview 4](7/preview_4.png) | ![preview 5](7/preview_5.png) | ![preview 6](7/preview_6.png) | ![preview 7](7/preview_7.png) | ![preview 8](7/preview_8.png) | | 8 | 60 | [Download](8/dataset.zip) | ![preview 1](8/preview_1.png) | ![preview 2](8/preview_2.png) | ![preview 3](8/preview_3.png) | ![preview 4](8/preview_4.png) | ![preview 5](8/preview_5.png) | ![preview 6](8/preview_6.png) | ![preview 7](8/preview_7.png) | ![preview 8](8/preview_8.png) | | 9 | 30 | [Download](9/dataset.zip) | ![preview 1](9/preview_1.png) | ![preview 2](9/preview_2.png) | ![preview 3](9/preview_3.png) | ![preview 4](9/preview_4.png) | ![preview 5](9/preview_5.png) | ![preview 6](9/preview_6.png) | ![preview 7](9/preview_7.png) | ![preview 8](9/preview_8.png) | | 10 | 42 | [Download](10/dataset.zip) | 
![preview 1](10/preview_1.png) | ![preview 2](10/preview_2.png) | ![preview 3](10/preview_3.png) | ![preview 4](10/preview_4.png) | ![preview 5](10/preview_5.png) | ![preview 6](10/preview_6.png) | ![preview 7](10/preview_7.png) | ![preview 8](10/preview_8.png) | | 11 | 9 | [Download](11/dataset.zip) | ![preview 1](11/preview_1.png) | ![preview 2](11/preview_2.png) | ![preview 3](11/preview_3.png) | ![preview 4](11/preview_4.png) | ![preview 5](11/preview_5.png) | ![preview 6](11/preview_6.png) | ![preview 7](11/preview_7.png) | ![preview 8](11/preview_8.png) | | 12 | 17 | [Download](12/dataset.zip) | ![preview 1](12/preview_1.png) | ![preview 2](12/preview_2.png) | ![preview 3](12/preview_3.png) | ![preview 4](12/preview_4.png) | ![preview 5](12/preview_5.png) | ![preview 6](12/preview_6.png) | ![preview 7](12/preview_7.png) | ![preview 8](12/preview_8.png) | | 13 | 12 | [Download](13/dataset.zip) | ![preview 1](13/preview_1.png) | ![preview 2](13/preview_2.png) | ![preview 3](13/preview_3.png) | ![preview 4](13/preview_4.png) | ![preview 5](13/preview_5.png) | ![preview 6](13/preview_6.png) | ![preview 7](13/preview_7.png) | ![preview 8](13/preview_8.png) | | 14 | 15 | [Download](14/dataset.zip) | ![preview 1](14/preview_1.png) | ![preview 2](14/preview_2.png) | ![preview 3](14/preview_3.png) | ![preview 4](14/preview_4.png) | ![preview 5](14/preview_5.png) | ![preview 6](14/preview_6.png) | ![preview 7](14/preview_7.png) | ![preview 8](14/preview_8.png) | | 15 | 17 | [Download](15/dataset.zip) | ![preview 1](15/preview_1.png) | ![preview 2](15/preview_2.png) | ![preview 3](15/preview_3.png) | ![preview 4](15/preview_4.png) | ![preview 5](15/preview_5.png) | ![preview 6](15/preview_6.png) | ![preview 7](15/preview_7.png) | ![preview 8](15/preview_8.png) | | 16 | 63 | [Download](16/dataset.zip) | ![preview 1](16/preview_1.png) | ![preview 2](16/preview_2.png) | ![preview 3](16/preview_3.png) | ![preview 4](16/preview_4.png) | ![preview 5](16/preview_5.png) | 
![preview 6](16/preview_6.png) | ![preview 7](16/preview_7.png) | ![preview 8](16/preview_8.png) | | 17 | 12 | [Download](17/dataset.zip) | ![preview 1](17/preview_1.png) | ![preview 2](17/preview_2.png) | ![preview 3](17/preview_3.png) | ![preview 4](17/preview_4.png) | ![preview 5](17/preview_5.png) | ![preview 6](17/preview_6.png) | ![preview 7](17/preview_7.png) | ![preview 8](17/preview_8.png) | | 18 | 118 | [Download](18/dataset.zip) | ![preview 1](18/preview_1.png) | ![preview 2](18/preview_2.png) | ![preview 3](18/preview_3.png) | ![preview 4](18/preview_4.png) | ![preview 5](18/preview_5.png) | ![preview 6](18/preview_6.png) | ![preview 7](18/preview_7.png) | ![preview 8](18/preview_8.png) | | 19 | 32 | [Download](19/dataset.zip) | ![preview 1](19/preview_1.png) | ![preview 2](19/preview_2.png) | ![preview 3](19/preview_3.png) | ![preview 4](19/preview_4.png) | ![preview 5](19/preview_5.png) | ![preview 6](19/preview_6.png) | ![preview 7](19/preview_7.png) | ![preview 8](19/preview_8.png) | | 20 | 89 | [Download](20/dataset.zip) | ![preview 1](20/preview_1.png) | ![preview 2](20/preview_2.png) | ![preview 3](20/preview_3.png) | ![preview 4](20/preview_4.png) | ![preview 5](20/preview_5.png) | ![preview 6](20/preview_6.png) | ![preview 7](20/preview_7.png) | ![preview 8](20/preview_8.png) | | 21 | 40 | [Download](21/dataset.zip) | ![preview 1](21/preview_1.png) | ![preview 2](21/preview_2.png) | ![preview 3](21/preview_3.png) | ![preview 4](21/preview_4.png) | ![preview 5](21/preview_5.png) | ![preview 6](21/preview_6.png) | ![preview 7](21/preview_7.png) | ![preview 8](21/preview_8.png) | | 22 | 30 | [Download](22/dataset.zip) | ![preview 1](22/preview_1.png) | ![preview 2](22/preview_2.png) | ![preview 3](22/preview_3.png) | ![preview 4](22/preview_4.png) | ![preview 5](22/preview_5.png) | ![preview 6](22/preview_6.png) | ![preview 7](22/preview_7.png) | ![preview 8](22/preview_8.png) | | 23 | 78 | [Download](23/dataset.zip) | ![preview 
1](23/preview_1.png) | ![preview 2](23/preview_2.png) | ![preview 3](23/preview_3.png) | ![preview 4](23/preview_4.png) | ![preview 5](23/preview_5.png) | ![preview 6](23/preview_6.png) | ![preview 7](23/preview_7.png) | ![preview 8](23/preview_8.png) | | 24 | 501 | [Download](24/dataset.zip) | ![preview 1](24/preview_1.png) | ![preview 2](24/preview_2.png) | ![preview 3](24/preview_3.png) | ![preview 4](24/preview_4.png) | ![preview 5](24/preview_5.png) | ![preview 6](24/preview_6.png) | ![preview 7](24/preview_7.png) | ![preview 8](24/preview_8.png) | | 25 | 40 | [Download](25/dataset.zip) | ![preview 1](25/preview_1.png) | ![preview 2](25/preview_2.png) | ![preview 3](25/preview_3.png) | ![preview 4](25/preview_4.png) | ![preview 5](25/preview_5.png) | ![preview 6](25/preview_6.png) | ![preview 7](25/preview_7.png) | ![preview 8](25/preview_8.png) | | 26 | 44 | [Download](26/dataset.zip) | ![preview 1](26/preview_1.png) | ![preview 2](26/preview_2.png) | ![preview 3](26/preview_3.png) | ![preview 4](26/preview_4.png) | ![preview 5](26/preview_5.png) | ![preview 6](26/preview_6.png) | ![preview 7](26/preview_7.png) | ![preview 8](26/preview_8.png) | | 27 | 19 | [Download](27/dataset.zip) | ![preview 1](27/preview_1.png) | ![preview 2](27/preview_2.png) | ![preview 3](27/preview_3.png) | ![preview 4](27/preview_4.png) | ![preview 5](27/preview_5.png) | ![preview 6](27/preview_6.png) | ![preview 7](27/preview_7.png) | ![preview 8](27/preview_8.png) | | 28 | 20 | [Download](28/dataset.zip) | ![preview 1](28/preview_1.png) | ![preview 2](28/preview_2.png) | ![preview 3](28/preview_3.png) | ![preview 4](28/preview_4.png) | ![preview 5](28/preview_5.png) | ![preview 6](28/preview_6.png) | ![preview 7](28/preview_7.png) | ![preview 8](28/preview_8.png) | | 29 | 19 | [Download](29/dataset.zip) | ![preview 1](29/preview_1.png) | ![preview 2](29/preview_2.png) | ![preview 3](29/preview_3.png) | ![preview 4](29/preview_4.png) | ![preview 5](29/preview_5.png) | ![preview 
6](29/preview_6.png) | ![preview 7](29/preview_7.png) | ![preview 8](29/preview_8.png) | | 30 | 17 | [Download](30/dataset.zip) | ![preview 1](30/preview_1.png) | ![preview 2](30/preview_2.png) | ![preview 3](30/preview_3.png) | ![preview 4](30/preview_4.png) | ![preview 5](30/preview_5.png) | ![preview 6](30/preview_6.png) | ![preview 7](30/preview_7.png) | ![preview 8](30/preview_8.png) | | 31 | 44 | [Download](31/dataset.zip) | ![preview 1](31/preview_1.png) | ![preview 2](31/preview_2.png) | ![preview 3](31/preview_3.png) | ![preview 4](31/preview_4.png) | ![preview 5](31/preview_5.png) | ![preview 6](31/preview_6.png) | ![preview 7](31/preview_7.png) | ![preview 8](31/preview_8.png) | | 32 | 7 | [Download](32/dataset.zip) | ![preview 1](32/preview_1.png) | ![preview 2](32/preview_2.png) | ![preview 3](32/preview_3.png) | ![preview 4](32/preview_4.png) | ![preview 5](32/preview_5.png) | ![preview 6](32/preview_6.png) | ![preview 7](32/preview_7.png) | N/A | | 33 | 79 | [Download](33/dataset.zip) | ![preview 1](33/preview_1.png) | ![preview 2](33/preview_2.png) | ![preview 3](33/preview_3.png) | ![preview 4](33/preview_4.png) | ![preview 5](33/preview_5.png) | ![preview 6](33/preview_6.png) | ![preview 7](33/preview_7.png) | ![preview 8](33/preview_8.png) | | 34 | 11 | [Download](34/dataset.zip) | ![preview 1](34/preview_1.png) | ![preview 2](34/preview_2.png) | ![preview 3](34/preview_3.png) | ![preview 4](34/preview_4.png) | ![preview 5](34/preview_5.png) | ![preview 6](34/preview_6.png) | ![preview 7](34/preview_7.png) | ![preview 8](34/preview_8.png) | | 35 | 18 | [Download](35/dataset.zip) | ![preview 1](35/preview_1.png) | ![preview 2](35/preview_2.png) | ![preview 3](35/preview_3.png) | ![preview 4](35/preview_4.png) | ![preview 5](35/preview_5.png) | ![preview 6](35/preview_6.png) | ![preview 7](35/preview_7.png) | ![preview 8](35/preview_8.png) | | 36 | 45 | [Download](36/dataset.zip) | ![preview 1](36/preview_1.png) | ![preview 2](36/preview_2.png) | 
![preview 3](36/preview_3.png) | ![preview 4](36/preview_4.png) | ![preview 5](36/preview_5.png) | ![preview 6](36/preview_6.png) | ![preview 7](36/preview_7.png) | ![preview 8](36/preview_8.png) | | 37 | 20 | [Download](37/dataset.zip) | ![preview 1](37/preview_1.png) | ![preview 2](37/preview_2.png) | ![preview 3](37/preview_3.png) | ![preview 4](37/preview_4.png) | ![preview 5](37/preview_5.png) | ![preview 6](37/preview_6.png) | ![preview 7](37/preview_7.png) | ![preview 8](37/preview_8.png) | | 38 | 12 | [Download](38/dataset.zip) | ![preview 1](38/preview_1.png) | ![preview 2](38/preview_2.png) | ![preview 3](38/preview_3.png) | ![preview 4](38/preview_4.png) | ![preview 5](38/preview_5.png) | ![preview 6](38/preview_6.png) | ![preview 7](38/preview_7.png) | ![preview 8](38/preview_8.png) | | 39 | 9 | [Download](39/dataset.zip) | ![preview 1](39/preview_1.png) | ![preview 2](39/preview_2.png) | ![preview 3](39/preview_3.png) | ![preview 4](39/preview_4.png) | ![preview 5](39/preview_5.png) | ![preview 6](39/preview_6.png) | ![preview 7](39/preview_7.png) | ![preview 8](39/preview_8.png) | | 40 | 30 | [Download](40/dataset.zip) | ![preview 1](40/preview_1.png) | ![preview 2](40/preview_2.png) | ![preview 3](40/preview_3.png) | ![preview 4](40/preview_4.png) | ![preview 5](40/preview_5.png) | ![preview 6](40/preview_6.png) | ![preview 7](40/preview_7.png) | ![preview 8](40/preview_8.png) | | 41 | 10 | [Download](41/dataset.zip) | ![preview 1](41/preview_1.png) | ![preview 2](41/preview_2.png) | ![preview 3](41/preview_3.png) | ![preview 4](41/preview_4.png) | ![preview 5](41/preview_5.png) | ![preview 6](41/preview_6.png) | ![preview 7](41/preview_7.png) | ![preview 8](41/preview_8.png) | | 42 | 10 | [Download](42/dataset.zip) | ![preview 1](42/preview_1.png) | ![preview 2](42/preview_2.png) | ![preview 3](42/preview_3.png) | ![preview 4](42/preview_4.png) | ![preview 5](42/preview_5.png) | ![preview 6](42/preview_6.png) | ![preview 7](42/preview_7.png) | 
![preview 8](42/preview_8.png) | | 43 | 6 | [Download](43/dataset.zip) | ![preview 1](43/preview_1.png) | ![preview 2](43/preview_2.png) | ![preview 3](43/preview_3.png) | ![preview 4](43/preview_4.png) | ![preview 5](43/preview_5.png) | ![preview 6](43/preview_6.png) | N/A | N/A | | noise | 126 | [Download](-1/dataset.zip) | ![preview 1](-1/preview_1.png) | ![preview 2](-1/preview_2.png) | ![preview 3](-1/preview_3.png) | ![preview 4](-1/preview_4.png) | ![preview 5](-1/preview_5.png) | ![preview 6](-1/preview_6.png) | ![preview 7](-1/preview_7.png) | ![preview 8](-1/preview_8.png) |
Shashwath01/VQARAD_SLAKE
--- license: apache-2.0 --- This dataset is a combination of VQARAD and SLAKE, for medical visual question answering. The dataset has been used to train the following model: https://huggingface.co/Shashwath01/Idefic_medical_VQA_merged_4bit Download the images.zip file and the dataset.csv file, and add all the image paths to the dataset.csv file.
Slime99gang/raiamsantosvoz
--- license: openrail ---
AIBreeding/XAI
--- license: apache-2.0 --- ## Dataset - For the present study, we used data from the GxE competition organized by the G2F project in 2022 (https://www.maizegxeprediction2022.org/), including genetic markers (G2F-G) for maize inbred lines, phenotypic measurements (G2F-P) collected throughout each growing season, metadata (G2F-M) for each field trial, environmental covariate (EC) data, and environmental (G2F-E) data. G2F-E data were mainly climatic and soil variables captured during crop development in each experimental trial.<br> - In order to explore the influence of environmental factors on yield prediction results, we designed two sets of prediction scenarios: **1)** yield prediction based on the whole genome, and **2)** yield prediction integrating genome, weather and soil factors. Different datasets were generated for the two prediction scenarios.<br> - ***For a detailed description of this dataset, please refer to the methods section of the paper.*** **Dataset file structure directory** ``` ├─test_set │ New_test_values.csv │ test_G.csv │ test_GE.csv │ └─train_set G.csv GE.csv New_Yield_values.csv train_Yield_folds.csv ``` **Description** **train_set** - ***G.csv***<br> Genome-wide principal component data used to train the G2P model. - ***GE.csv***<br> The data was integrated from genome-wide principal component data, weather data and soil data.
- ***train_Yield_folds.csv***<br> The dataset is a ten-fold cross-validation dataset generated by the kfolds.py script for model training and testing.<br> - ***New_Yield_values.csv*** This dataset is assembled from the base model predictions and is primarily used to train the second layer of models in the stacking framework.<br> **test_set** - ***test_G.csv*** This dataset is a predicted population of target genotypes from an untested environment and is used to validate the predictive performance of the model when environmental effects are ignored.<br> - ***test_GE.csv*** This dataset was integrated from genotype and environment to validate the predictive performance of the model across environments under environmental stress.<br> - ***New_test_values.csv*** This dataset is composed of the predicted values of the base model in the new environment and is used as a prediction set for the second layer of the model in the stacking framework.
C-MTEB/EcomRetrieval
--- configs: - config_name: default data_files: - split: corpus path: data/corpus-* - split: queries path: data/queries-* dataset_info: features: - name: id dtype: string - name: text dtype: string splits: - name: corpus num_bytes: 9930587 num_examples: 100902 - name: queries num_bytes: 32376 num_examples: 1000 download_size: 8448455 dataset_size: 9962963 --- # Dataset Card for "EcomRetrieval" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
freshpearYoon/v3_val_free_6
--- dataset_info: features: - name: input_features sequence: sequence: float32 - name: labels sequence: int64 splits: - name: train num_bytes: 13991580576 num_examples: 9105 download_size: 2123460884 dataset_size: 13991580576 configs: - config_name: default data_files: - split: train path: data/train-* ---
CyberHarem/uzuki_kantaicollection
--- license: mit task_categories: - text-to-image tags: - art - not-for-all-audiences size_categories: - n<1K --- # Dataset of uzuki (Kantai Collection) This is the dataset of uzuki (Kantai Collection), containing 500 images and their tags. The core tags of this character are `long_hair, hair_ornament, crescent_hair_ornament, ahoge, red_eyes, rabbit_hair_ornament, ribbon, pink_hair, hair_ribbon, very_long_hair, low-tied_long_hair, red_hair`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 500 | 489.97 MiB | [Download](https://huggingface.co/datasets/CyberHarem/uzuki_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 500 | 341.21 MiB | [Download](https://huggingface.co/datasets/CyberHarem/uzuki_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 1209 | 724.93 MiB | [Download](https://huggingface.co/datasets/CyberHarem/uzuki_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 500 | 460.53 MiB | [Download](https://huggingface.co/datasets/CyberHarem/uzuki_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. 
| | stage3-p480-1200 | 1209 | 913.15 MiB | [Download](https://huggingface.co/datasets/CyberHarem/uzuki_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/uzuki_kantaicollection', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 25 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, black_skirt, blue_shirt, blush, crescent, long_sleeves, pleated_skirt, serafuku, solo, black_sailor_collar, frilled_skirt, hair_between_eyes, blue_socks, bangs, kneehighs, looking_at_viewer, open_mouth, pink_ribbon, fang, black_ribbon, neck_ribbon, shoes, rabbit_ears, :d, standing, grey_footwear, :3, navel, kemonomimi_mode | | 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, black_skirt, blue_shirt, blush, crescent, long_sleeves, pleated_skirt, solo, black_sailor_collar, frilled_skirt, blue_serafuku, cowboy_shot, looking_at_viewer, open_mouth, hair_between_eyes, smile, fang | | 2 | 13 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, black_sailor_collar, black_skirt, crescent, pleated_skirt, serafuku, simple_background, solo, blue_shirt, white_background, frilled_skirt, full_body, looking_at_viewer, open_mouth, smile, chibi, kneehighs, standing | | 3 | 5 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | 
![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, crescent, looking_at_viewer, open_mouth, serafuku, skirt, smile, solo, fang, rabbit_ears, blush | | 4 | 5 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, crescent, looking_at_viewer, serafuku, simple_background, solo, white_background, open_mouth, skirt, blush | | 5 | 10 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, blue_shirt, crescent, open_mouth, serafuku, solo, upper_body, black_sailor_collar, blush, fang, long_sleeves, looking_at_viewer, neck_ribbon, :d, black_ribbon, hair_between_eyes, pink_ribbon, rabbit_ears, bangs, pink_background, :3, outline, gradient_background | | 6 | 8 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | crescent, serafuku, skirt, 2girls, blush, open_mouth, navel, socks, looking_at_viewer, pink_eyes, short_hair_with_long_locks, smile, solo_focus, blue_eyes, purple_hair | | 7 | 6 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | 1girl, crescent, detached_collar, fake_animal_ears, playboy_bunny, rabbit_ears, solo, strapless_leotard, black_leotard, black_pantyhose, bowtie, covered_navel, looking_at_viewer, wrist_cuffs, high_heels, open_mouth, black_footwear, bunny_pose, dated, rabbit_tail, small_breasts, smile, white_background | | 8 | 5 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | 1girl, alternate_costume, 
crescent, obi, solo, floral_print, looking_at_viewer, smile, full_body, open_mouth, pink_kimono, sandals, tabi, wide_sleeves, alternate_hairstyle, braid, happy_new_year, long_sleeves, upper_body | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_skirt | blue_shirt | blush | crescent | long_sleeves | pleated_skirt | serafuku | solo | black_sailor_collar | frilled_skirt | hair_between_eyes | blue_socks | bangs | kneehighs | looking_at_viewer | open_mouth | pink_ribbon | fang | black_ribbon | neck_ribbon | shoes | rabbit_ears | :d | standing | grey_footwear | :3 | navel | kemonomimi_mode | blue_serafuku | cowboy_shot | smile | simple_background | white_background | full_body | chibi | skirt | upper_body | pink_background | outline | gradient_background | 2girls | socks | pink_eyes | short_hair_with_long_locks | solo_focus | blue_eyes | purple_hair | detached_collar | fake_animal_ears | playboy_bunny | strapless_leotard | black_leotard | black_pantyhose | bowtie | covered_navel | wrist_cuffs | high_heels | black_footwear | bunny_pose | dated | rabbit_tail | small_breasts | alternate_costume | obi | floral_print | pink_kimono | sandals | tabi | wide_sleeves | alternate_hairstyle | braid | happy_new_year | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------|:-------------|:--------|:-----------|:---------------|:----------------|:-----------|:-------|:----------------------|:----------------|:--------------------|:-------------|:--------|:------------|:--------------------|:-------------|:--------------|:-------|:---------------|:--------------|:--------|:--------------|:-----|:-----------|:----------------|:-----|:--------|:------------------|:----------------|:--------------|:--------|:--------------------|:-------------------|:------------|:--------|:--------|:-------------|:------------------|:----------|:----------------------|:---------|:--------|:------------|:-----------------------------|:-------------|:------------|:--------------|:------------------|:-------------------|:----------------|:--------------------|:----------------|:------------------|:---------|:----------------|:--------------|:-------------|:-----------------|:-------------|:--------|:--------------|:----------------|:--------------------|:------|:---------------|:--------------|:----------|:-------|:---------------|:----------------------|:--------|:-----------------| | 0 | 25 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | X | | X | X | X | X | | | | X | X | | X | | | | | | | | | | | X | X | X | | | | | | | 
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 13 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | | X | | X | X | X | X | X | | | | X | X | X | | | | | | | | X | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 5 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | | | X | X | | | X | X | | | | | | | X | X | | X | | | | X | | | | | | | | | X | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 5 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | | | X | X | | | X | X | | | | | | | X | X | | | | | | | | | | | | | | | | X | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 5 | 10 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | | X | X | X | X | | X | X | X | | X | | X | | X | X | X | X | X | X | | X | X | | | X | | | | | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 6 | 8 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | | | | X | X | | | X | | | | | | | | X | X | | | | | | | | | | | X | | | | X | | | | | X | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | 7 | 6 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) 
| ![](samples/7/clu7-sample4.png) | X | | | | X | | | | X | | | | | | | X | X | | | | | | X | | | | | | | | | X | | X | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | 8 | 5 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | X | | | | X | X | | | X | | | | | | | X | X | | | | | | | | | | | | | | | X | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X |
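The `IMG+TXT` packages above ship each image next to a `.txt` file with its tags. Below is a minimal loading sketch; the pairing convention (same file stem, comma-separated tags in the `.txt` file) is our assumption from the package description, and `download_package` needs `huggingface_hub` plus network access.

```python
import os
import zipfile


def download_package(filename, dataset_dir):
    """Download one package zip of this dataset and extract it locally."""
    # requires the huggingface_hub package and network access
    from huggingface_hub import hf_hub_download
    zip_file = hf_hub_download(
        repo_id='CyberHarem/uzuki_kantaicollection',
        repo_type='dataset',
        filename=filename,
    )
    os.makedirs(dataset_dir, exist_ok=True)
    with zipfile.ZipFile(zip_file, 'r') as zf:
        zf.extractall(dataset_dir)
    return dataset_dir


def pair_images_with_tags(dataset_dir):
    """Pair every image with the tag list from the .txt file sharing its stem."""
    pairs = []
    for name in sorted(os.listdir(dataset_dir)):
        stem, ext = os.path.splitext(name)
        if ext.lower() not in ('.png', '.jpg', '.jpeg', '.webp'):
            continue
        txt_path = os.path.join(dataset_dir, stem + '.txt')
        if not os.path.exists(txt_path):
            continue  # skip images without a tag file
        with open(txt_path, encoding='utf-8') as f:
            tags = [t.strip() for t in f.read().split(',') if t.strip()]
        pairs.append((os.path.join(dataset_dir, name), tags))
    return pairs
```

For example, `pair_images_with_tags(download_package('dataset-800.zip', 'dataset_800'))` yields `(image_path, tags)` tuples for the 800px package.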
huypn16/annotated-spoc
--- dataset_info: features: - name: 'Unnamed: 0' dtype: int64 - name: probid dtype: string - name: subid dtype: int64 - name: text dtype: string - name: code dtype: string - name: input_specs dtype: string splits: - name: train num_bytes: 25397881 num_examples: 14548 - name: test num_bytes: 3800065 num_examples: 1778 download_size: 5454735 dataset_size: 29197946 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
ahishamm/combined_masks
--- dataset_info: features: - name: image dtype: image - name: label dtype: image splits: - name: train num_bytes: 84421868.7 num_examples: 2950 download_size: 48117493 dataset_size: 84421868.7 --- # Dataset Card for "combined_masks" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Fiaa/ArtGenreTest
--- configs: - config_name: default data_files: - split: val path: data/val-* - split: test path: data/test-* dataset_info: features: - name: idx dtype: int32 - name: image1 dtype: image - name: image2 dtype: image - name: image3 dtype: image - name: image4 dtype: image - name: choices sequence: string splits: - name: val num_bytes: 240272359.0 num_examples: 150 - name: test num_bytes: 245745240.0 num_examples: 150 download_size: 480433760 dataset_size: 486017599.0 --- ## Source dataset created from https://huggingface.co/datasets/huggan/wikiart ## Task Find two images that have the same genre ## Prompt: ``` The most common method for classifying art paintings is by genre (or theme). The term “genre” refers to the type of image that serves as the subject of a painting. The genre of a painting is classified as a cityscape, landscape, nude painting, portrait, religious painting, sketch and study, or still life. Given the four images of art paintings, can you tell which two of them have the same genre? Select between the following choices. (A) ... (B) ... (C) ... (D) ... ``` --- license: apache-2.0 dataset_info: features: - name: idx dtype: int32 - name: image1 dtype: image - name: image2 dtype: image - name: image3 dtype: image - name: image4 dtype: image - name: choices sequence: string splits: - name: test num_bytes: 486017599.0 num_examples: 300 download_size: 480853119 dataset_size: 486017599.0 configs: - config_name: default data_files: - split: test path: data/test-* ---
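The prompt above can be assembled from the `choices` field of each record. A small sketch follows; the wording is copied from the template above, while the `build_prompt` helper itself is our addition, not part of the dataset.

```python
def build_prompt(choices):
    """Fill the four-image genre-matching prompt template with lettered choices."""
    header = (
        "The most common method for classifying art paintings is by genre "
        "(or theme). The term \u201cgenre\u201d refers to the type of image that "
        "serves as the subject of a painting. The genre of a painting is "
        "classified as a cityscape, landscape, nude painting, portrait, "
        "religious painting, sketch and study, or still life. Given the four "
        "images of art paintings, can you tell which two of them have the "
        "same genre? Select between the following choices."
    )
    letters = "ABCD"
    options = "\n".join(
        f"({letters[i]}) {choice}" for i, choice in enumerate(choices)
    )
    return header + "\n" + options
```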
PurCL/malware-top-100
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* - split: valid path: data/valid-* dataset_info: features: - name: binary_name dtype: string - name: labels sequence: string - name: functions dtype: string splits: - name: train num_bytes: 5667834326.115244 num_examples: 3728 - name: test num_bytes: 1667814982.765135 num_examples: 1097 - name: valid num_bytes: 1001905263.1196207 num_examples: 659 download_size: 2454551882 dataset_size: 8337554571.999999 --- # Dataset Card for "malware-top-100" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yuvalkirstain/portrait_dreambooth
--- dataset_info: features: - name: image dtype: image - name: text dtype: string splits: - name: train num_bytes: 123049169.0 num_examples: 286 - name: validation num_bytes: 7122908.0 num_examples: 20 download_size: 123406667 dataset_size: 130172077.0 --- # Dataset Card for "portrait_dreambooth" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AdapterOcean/med_alpaca_standardized_cluster_52_std
--- dataset_info: features: - name: message dtype: string - name: message_type dtype: string - name: message_id dtype: int64 - name: conversation_id dtype: int64 - name: cluster dtype: float64 - name: __index_level_0__ dtype: int64 splits: - name: train num_bytes: 31112986 num_examples: 45428 download_size: 15270814 dataset_size: 31112986 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "med_alpaca_standardized_cluster_52_std" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
joey234/mmlu-high_school_biology-neg-answer
--- dataset_info: features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D - name: neg_answer dtype: string splits: - name: test num_bytes: 126929 num_examples: 310 download_size: 74346 dataset_size: 126929 --- # Dataset Card for "mmlu-high_school_biology-neg-answer" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ArmurAI/Solana_vulnerability_audit_dataset
--- license: mit ---
harpreetsahota/gemma_vibecheck_testprompts
--- language: - en dataset_info: features: - name: category dtype: string - name: prompt dtype: string - name: DeciLM-7B-Instruct dtype: string - name: Gemma-7B-it dtype: string - name: coherence_DeciLM-7B-Instruct struct: - name: reasoning dtype: string - name: score dtype: int64 - name: value dtype: string - name: coherence_Gemma-7B-it struct: - name: reasoning dtype: string - name: score dtype: int64 - name: value dtype: string splits: - name: train num_bytes: 108228 num_examples: 30 download_size: 69559 dataset_size: 108228 configs: - config_name: default data_files: - split: train path: data/train-* ---
jsonifize/SlimOrca-Dedup-4keys
--- language: - en tags: - jsonifize - NLP pretty_name: SlimOrca-Dedup-4keys source_datasets: - Open-Orca/SlimOrca-Dedup dataset_info: features: - name: line dtype: string - name: values dtype: string - name: schema dtype: string - name: randomvalueremoved dtype: string --- # Open-Orca/SlimOrca-Dedup ``` { "processed": true, "4keys": true, "jsonifize": true, "uploaded": true } ``` LICENSE FOUND AT: https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup Reformatting generated by [AlignmentLab.AI](https://Alignmentlab.ai). Please refer to the original authors' work for attribution. # line ``` {'conversations': [{'from': 'system', 'value': 'You are an AI assistant. You will be given a task. You must generate a detailed and long answer.'}, {'from': 'human', 'value': 'Write the next sentence. Brett thought it would be romantic to go on a picnic but Jason thought there were too many bugs. Available options: *Brett suggested they eat inside instead.; *Jason suggested they eat inside instead.; Answer:'}, {'from': 'gpt', 'value': "Jason, after acknowledging Brett's good intentions but expressing his concern about the pesky insects, came up with a considerate compromise and suggested they bring the romantic atmosphere indoors by creating an indoor picnic, complete with a cozy setup and all their favorite foods, in the ease and comfort of their home, away from the irritating bugs, while still maintaining the essence of their initial plan, ultimately strengthening their bond and ensuring a memorable experience for both."}]} ``` # schema ``` {'conversations': [{'from': None, 'value': None}, {'from': None, 'value': None}, {'from': None, 'value': None}]} ``` # values ``` system You are an AI assistant. You will be given a task. You must generate a detailed and long answer. human Write the next sentence. Brett thought it would be romantic to go on a picnic but Jason thought there were too many bugs. 
Available options: *Brett suggested they eat inside instead.; *Jason suggested they eat inside instead.; Answer: gpt Jason, after acknowledging Brett's good intentions but expressing his concern about the pesky insects, came up with a considerate compromise and suggested they bring the romantic atmosphere indoors by creating an indoor picnic, complete with a cozy setup and all their favorite foods, in the ease and comfort of their home, away from the irritating bugs, while still maintaining the essence of their initial plan, ultimately strengthening their bond and ensuring a memorable experience for both. ``` # randomvalueremoved ``` {'conversations': [{'from': None, 'value': 'You are an AI assistant. You will be given a task. You must generate a detailed and long answer.'}, {'from': None, 'value': None}, {'from': None, 'value': None}]} ```
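From the examples above, the `schema` field looks like the original record with every leaf value replaced by `None` (and `randomvalueremoved` like a partially nulled variant). Below is a sketch of the full-nulling transformation as we read it; this is our reconstruction, not the authors' code.

```python
def to_schema(obj):
    """Replace every leaf value in a nested dict/list structure with None."""
    if isinstance(obj, dict):
        return {key: to_schema(value) for key, value in obj.items()}
    if isinstance(obj, list):
        return [to_schema(value) for value in obj]
    # any non-container value is treated as a leaf
    return None
```

Applied to the `line` example above, this reproduces the shown `schema` value.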
gayanin/pubmed-abstracts-dist-noised-v2
--- dataset_info: - config_name: babylon-01 features: - name: refs dtype: string - name: trans dtype: string splits: - name: train num_bytes: 6595134 num_examples: 24908 - name: test num_bytes: 816662 num_examples: 3113 - name: validation num_bytes: 798507 num_examples: 3114 download_size: 4608640 dataset_size: 8210303 - config_name: babylon-02 features: - name: refs dtype: string - name: trans dtype: string splits: - name: train num_bytes: 6649302 num_examples: 24908 - name: test num_bytes: 823956 num_examples: 3113 - name: validation num_bytes: 804360 num_examples: 3114 download_size: 4709047 dataset_size: 8277618 - config_name: babylon-03 features: - name: refs dtype: string - name: trans dtype: string splits: - name: train num_bytes: 6704389 num_examples: 24908 - name: test num_bytes: 830323 num_examples: 3113 - name: validation num_bytes: 811675 num_examples: 3114 download_size: 4797674 dataset_size: 8346387 - config_name: gcd-01 features: - name: refs dtype: string - name: trans dtype: string splits: - name: train num_bytes: 6448182 num_examples: 24908 - name: test num_bytes: 813378 num_examples: 3113 - name: validation num_bytes: 802452 num_examples: 3114 download_size: 4503363 dataset_size: 8064012 - config_name: gcd-02 features: - name: refs dtype: string - name: trans dtype: string splits: - name: train num_bytes: 6431922 num_examples: 24908 - name: test num_bytes: 810489 num_examples: 3113 - name: validation num_bytes: 800488 num_examples: 3114 download_size: 4550524 dataset_size: 8042899 - config_name: gcd-03 features: - name: refs dtype: string - name: trans dtype: string splits: - name: train num_bytes: 6416391 num_examples: 24908 - name: test num_bytes: 808759 num_examples: 3113 - name: validation num_bytes: 797257 num_examples: 3114 download_size: 4584136 dataset_size: 8022407 - config_name: kaggle-01 features: - name: refs dtype: string - name: trans dtype: string splits: - name: train num_bytes: 5836542 num_examples: 24908 - name: test num_bytes: 
803285 num_examples: 3114 - name: validation num_bytes: 801836 num_examples: 3114 download_size: 4176206 dataset_size: 7441663 - config_name: kaggle-02 features: - name: refs dtype: string - name: trans dtype: string splits: - name: train num_bytes: 5811500 num_examples: 24908 - name: test num_bytes: 801272 num_examples: 3114 - name: validation num_bytes: 798472 num_examples: 3114 download_size: 4210456 dataset_size: 7411244 - config_name: kaggle-03 features: - name: refs dtype: string - name: trans dtype: string splits: - name: train num_bytes: 5789912 num_examples: 24908 - name: test num_bytes: 797824 num_examples: 3114 - name: validation num_bytes: 796074 num_examples: 3114 download_size: 4237457 dataset_size: 7383810 - config_name: kaggle-04 features: - name: refs dtype: string - name: trans dtype: string splits: - name: train num_bytes: 5761003 num_examples: 24908 - name: test num_bytes: 794947 num_examples: 3114 - name: validation num_bytes: 792732 num_examples: 3114 download_size: 4253250 dataset_size: 7348682 configs: - config_name: babylon-01 data_files: - split: train path: babylon-01/train-* - split: test path: babylon-01/test-* - split: validation path: babylon-01/validation-* - config_name: babylon-02 data_files: - split: train path: babylon-02/train-* - split: test path: babylon-02/test-* - split: validation path: babylon-02/validation-* - config_name: babylon-03 data_files: - split: train path: babylon-03/train-* - split: test path: babylon-03/test-* - split: validation path: babylon-03/validation-* - config_name: gcd-01 data_files: - split: train path: gcd-01/train-* - split: test path: gcd-01/test-* - split: validation path: gcd-01/validation-* - config_name: gcd-02 data_files: - split: train path: gcd-02/train-* - split: test path: gcd-02/test-* - split: validation path: gcd-02/validation-* - config_name: gcd-03 data_files: - split: train path: gcd-03/train-* - split: test path: gcd-03/test-* - split: validation path: gcd-03/validation-* - 
config_name: kaggle-01 data_files: - split: train path: kaggle-01/train-* - split: test path: kaggle-01/test-* - split: validation path: kaggle-01/validation-* - config_name: kaggle-02 data_files: - split: train path: kaggle-02/train-* - split: test path: kaggle-02/test-* - split: validation path: kaggle-02/validation-* - config_name: kaggle-03 data_files: - split: train path: kaggle-03/train-* - split: test path: kaggle-03/test-* - split: validation path: kaggle-03/validation-* - config_name: kaggle-04 data_files: - split: train path: kaggle-04/train-* - split: test path: kaggle-04/test-* - split: validation path: kaggle-04/validation-* ---
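The configurations above follow a `<source>-<level>` naming scheme (babylon 01-03, gcd 01-03, kaggle 01-04). A sketch for enumerating and loading them follows; the `sources` mapping is read off the YAML above, and `load_config` needs the `datasets` library plus network access.

```python
def config_names():
    """Enumerate the config names listed in the card's YAML."""
    sources = {'babylon': 3, 'gcd': 3, 'kaggle': 4}
    return [f'{source}-{level:02d}'
            for source, count in sources.items()
            for level in range(1, count + 1)]


def load_config(name):
    """Load one noised configuration (requires the datasets package)."""
    from datasets import load_dataset
    return load_dataset('gayanin/pubmed-abstracts-dist-noised-v2', name)
```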
liuyanchen1015/MULTI_VALUE_stsb_too_sub
--- dataset_info: features: - name: sentence1 dtype: string - name: sentence2 dtype: string - name: score dtype: float64 - name: idx dtype: int64 - name: value_score dtype: int64 splits: - name: dev num_bytes: 3692 num_examples: 19 - name: test num_bytes: 1423 num_examples: 12 - name: train num_bytes: 3188 num_examples: 17 download_size: 14528 dataset_size: 8303 --- # Dataset Card for "MULTI_VALUE_stsb_too_sub" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
DolphinNie/Dungeon_Map
--- license: agpl-3.0 ---
p1atdev/pinterest
--- license: cc0-1.0 --- This dataset was created using [this tool](https://github.com/p1atdev/pinterest-wizard). # concept_art.json 589 images about "Concept Art" from **Pinterest searches** 🤗. # double_exposure.json 597 images about "Double Exposure" from **Pinterest searches** 🤗. # vaporwave.json 599 images about "Vaporwave" from **Pinterest searches** 🤗. # typography.json 633 images about "Typography" from **Pinterest searches** 🤗. # portrait.json 573 images about "Portrait" from **Pinterest searches** 🤗. # selfie.json 584 images about "自撮り 女の子" ("selfie girls") from **Pinterest searches** 🤗. # Type ```ts interface Pinterest { url: string // pinterest page url alt: string // description of the image (not always accurate) src: string // image url tags: string[] // related tags } ```
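Assuming each `*.json` file is an array of objects matching the `Pinterest` interface above, here is a Python sketch for fetching one file and collecting its related tags. The helper names are ours, and `download_pins` needs `huggingface_hub` plus network access.

```python
import json


def load_pins(path):
    """Parse a local JSON file as a list of pin records."""
    with open(path, encoding='utf-8') as f:
        return json.load(f)


def collect_tags(pins):
    """Collect the distinct related tags across a list of pins."""
    return sorted({tag for pin in pins for tag in pin['tags']})


def download_pins(filename):
    """Fetch one JSON file (e.g. 'vaporwave.json') from the dataset repo."""
    # requires the huggingface_hub package and network access
    from huggingface_hub import hf_hub_download
    return load_pins(hf_hub_download(
        repo_id='p1atdev/pinterest',
        repo_type='dataset',
        filename=filename,
    ))
```

For example, `collect_tags(download_pins('vaporwave.json'))` lists the distinct tags seen across the "Vaporwave" pins.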
cavendishlabs/rebus
--- dataset_info: features: - name: Filename dtype: string - name: Solution dtype: string - name: Also accept dtype: string - name: Theme dtype: string - name: Difficulty dtype: string - name: Exact spelling? dtype: string - name: Specific reference dtype: string - name: Reading? dtype: string - name: Attribution dtype: string - name: Author dtype: string - name: image dtype: image splits: - name: train num_bytes: 51545282.0 num_examples: 333 download_size: 47656838 dataset_size: 51545282.0 configs: - config_name: default data_files: - split: train path: data/train-* --- # REBUS REBUS: A Robust Evaluation Benchmark of Understanding Symbols [**Paper**](https://arxiv.org/abs/2401.05604) | [**🤗 Dataset**](https://huggingface.co/datasets/cavendishlabs/rebus) | [**GitHub**](https://github.com/cvndsh/rebus) | [**Website**](https://cavendishlabs.org/rebus/) ## Introduction Recent advances in large language models have led to the development of multimodal LLMs (MLLMs), which take both image data and text as input. Virtually all of these models have been announced within the past year, leading to a significant need for benchmarks evaluating the abilities of these models to reason truthfully and accurately on a diverse set of tasks. When Google announced Gemini Pro (Gemini Team et al., 2023), they demonstrated its ability to solve rebuses—wordplay puzzles which involve creatively adding and subtracting letters from words derived from text and images. The diversity of rebuses allows for a broad evaluation of multimodal reasoning capabilities, including image recognition, multi-step reasoning, and understanding the human creator's intent. We present REBUS: a collection of 333 hand-crafted rebuses spanning 13 diverse categories, including hand-drawn and digital images created by nine contributors. Samples are presented in the table below. 
Notably, GPT-4V, the most powerful model we evaluated, answered only 24% of puzzles correctly, highlighting the poor capabilities of MLLMs in new and unexpected domains to which human reasoning generalizes with comparative ease. Open-source models perform even worse, with a median accuracy below 1%. We notice that models often give unfaithful explanations, fail to change their minds after an initial approach doesn't work, and remain highly uncalibrated on their own abilities. ![image](https://github.com/cvndsh/rebus/assets/10122030/131bde1a-9a09-44cc-abc3-efe874b95b23) ## Evaluation results | Model | Overall | Easy | Medium | Hard | | ----------------- | ------------- | ------------- | ------------- | ------------ | | GPT-4V | **24.0** | **33.0** | **13.2** | **7.1** | | Gemini Pro | 13.2 | 19.4 | 5.3 | 3.6 | | LLaVa-1.5-13B | 1.8 | 2.6 | 0.9 | 0.0 | | LLaVa-1.5-7B | 1.5 | 2.6 | 0.0 | 0.0 | | BLIP2-FLAN-T5-XXL | 0.9 | 0.5 | 1.8 | 0.0 | | CogVLM | 0.9 | 1.6 | 0.0 | 0.0 | | QWEN | 0.9 | 1.6 | 0.0 | 0.0 | | InstructBLIP | 0.6 | 0.5 | 0.9 | 0.0 |
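The `Solution` and `Also accept` features suggest lenient grading: ignore case, spacing, and punctuation, and accept any listed alternative. The sketch below implements such a grader; the normalization rule and the ';' separator for `Also accept` are our assumptions, not the paper's exact scoring code.

```python
def _normalize(text):
    """Lowercase the text and strip everything except letters and digits."""
    return ''.join(ch.lower() for ch in text if ch.isalnum())


def is_correct(prediction, solution, also_accept=''):
    """Check a model answer against the solution and accepted alternatives.

    Assumes alternatives in `also_accept` are ';'-separated.
    """
    accepted = {_normalize(solution)}
    accepted.update(_normalize(alt) for alt in also_accept.split(';') if alt.strip())
    return _normalize(prediction) in accepted
```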
heliosprime/twitter_dataset_1713169920
--- dataset_info: features: - name: id dtype: string - name: tweet_content dtype: string - name: user_name dtype: string - name: user_id dtype: string - name: created_at dtype: string - name: url dtype: string - name: favourite_count dtype: int64 - name: scraped_at dtype: string - name: image_urls dtype: string splits: - name: train num_bytes: 5447 num_examples: 15 download_size: 9936 dataset_size: 5447 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "twitter_dataset_1713169920" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
alesanm/balenciaga_short_descriptions
--- configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: image dtype: image - name: text dtype: string splits: - name: train num_bytes: 268651301.0 num_examples: 141 download_size: 268647397 dataset_size: 268651301.0 --- # Dataset Card for "balenciaga_short_descriptions" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
open-llm-leaderboard/details_SciPhi__SciPhi-Self-RAG-Mistral-7B-32k
--- pretty_name: Evaluation run of SciPhi/SciPhi-Self-RAG-Mistral-7B-32k dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [SciPhi/SciPhi-Self-RAG-Mistral-7B-32k](https://huggingface.co/SciPhi/SciPhi-Self-RAG-Mistral-7B-32k)\ \ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 64 configurations, each one corresponding to one of the\ \ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" stores all the aggregated results of the\ \ run (and is used to compute and display the aggregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_SciPhi__SciPhi-Self-RAG-Mistral-7B-32k_public\"\ ,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\ These are the [latest results from run 2023-11-20T07:34:10.299317](https://huggingface.co/datasets/open-llm-leaderboard/details_SciPhi__SciPhi-Self-RAG-Mistral-7B-32k_public/blob/main/results_2023-11-20T07-34-10.299317.json)(note\ \ that there might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6025602935621713,\n\ \ \"acc_stderr\": 0.03298893084060298,\n \"acc_norm\": 0.6108691668755563,\n\ \ \"acc_norm_stderr\": 0.033697327638679664,\n \"mc1\": 0.3011015911872705,\n\ \ \"mc1_stderr\": 0.016058999026100612,\n \"mc2\": 0.45630119540652325,\n\ \ \"mc2_stderr\": 0.015576968550952539,\n \"em\": 0.2489513422818792,\n\ \ \"em_stderr\": 0.004428237695563259,\n \"f1\": 0.31078754194630925,\n\ \ \"f1_stderr\": 0.004386404073454112\n },\n \"harness|arc:challenge|25\"\ : {\n \"acc\": 0.5341296928327645,\n \"acc_stderr\": 0.0145773113152311,\n\ \ \"acc_norm\": 0.5733788395904437,\n \"acc_norm_stderr\": 0.014453185592920293\n\ \ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6132244572794264,\n\ \ \"acc_stderr\": 0.004860162076330984,\n \"acc_norm\": 0.8044214299940251,\n\ \ \"acc_norm_stderr\": 0.003958347934520328\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\ : {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \ \ \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n \ \ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.48148148148148145,\n\ \ \"acc_stderr\": 0.043163785995113245,\n \"acc_norm\": 0.48148148148148145,\n\ \ \"acc_norm_stderr\": 0.043163785995113245\n },\n \"harness|hendrycksTest-astronomy|5\"\ : {\n \"acc\": 0.6513157894736842,\n \"acc_stderr\": 0.0387813988879761,\n\ \ \"acc_norm\": 0.6513157894736842,\n \"acc_norm_stderr\": 0.0387813988879761\n\ \ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.56,\n\ \ \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.56,\n \ \ \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\ : {\n \"acc\": 0.6264150943396226,\n \"acc_stderr\": 0.029773082713319878,\n\ \ \"acc_norm\": 0.6264150943396226,\n \"acc_norm_stderr\": 0.029773082713319878\n\ \ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 
0.7083333333333334,\n\ \ \"acc_stderr\": 0.03800968060554859,\n \"acc_norm\": 0.7083333333333334,\n\ \ \"acc_norm_stderr\": 0.03800968060554859\n },\n \"harness|hendrycksTest-college_chemistry|5\"\ : {\n \"acc\": 0.44,\n \"acc_stderr\": 0.0498887651569859,\n \ \ \"acc_norm\": 0.44,\n \"acc_norm_stderr\": 0.0498887651569859\n },\n\ \ \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.48,\n\ \ \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.48,\n \ \ \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_mathematics|5\"\ : {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \ \ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n \ \ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6242774566473989,\n\ \ \"acc_stderr\": 0.036928207672648664,\n \"acc_norm\": 0.6242774566473989,\n\ \ \"acc_norm_stderr\": 0.036928207672648664\n },\n \"harness|hendrycksTest-college_physics|5\"\ : {\n \"acc\": 0.4411764705882353,\n \"acc_stderr\": 0.049406356306056595,\n\ \ \"acc_norm\": 0.4411764705882353,\n \"acc_norm_stderr\": 0.049406356306056595\n\ \ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\ \ 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n\ \ \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\ : {\n \"acc\": 0.5063829787234042,\n \"acc_stderr\": 0.032683358999363366,\n\ \ \"acc_norm\": 0.5063829787234042,\n \"acc_norm_stderr\": 0.032683358999363366\n\ \ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.38596491228070173,\n\ \ \"acc_stderr\": 0.045796394220704334,\n \"acc_norm\": 0.38596491228070173,\n\ \ \"acc_norm_stderr\": 0.045796394220704334\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\ : {\n \"acc\": 0.5310344827586206,\n \"acc_stderr\": 0.04158632762097828,\n\ \ \"acc_norm\": 0.5310344827586206,\n \"acc_norm_stderr\": 0.04158632762097828\n\ \ },\n 
\"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\ : 0.42063492063492064,\n \"acc_stderr\": 0.025424835086923992,\n \"\ acc_norm\": 0.42063492063492064,\n \"acc_norm_stderr\": 0.025424835086923992\n\ \ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3888888888888889,\n\ \ \"acc_stderr\": 0.04360314860077459,\n \"acc_norm\": 0.3888888888888889,\n\ \ \"acc_norm_stderr\": 0.04360314860077459\n },\n \"harness|hendrycksTest-global_facts|5\"\ : {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \ \ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n \ \ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7258064516129032,\n\ \ \"acc_stderr\": 0.0253781399708852,\n \"acc_norm\": 0.7258064516129032,\n\ \ \"acc_norm_stderr\": 0.0253781399708852\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\ : {\n \"acc\": 0.4729064039408867,\n \"acc_stderr\": 0.03512819077876106,\n\ \ \"acc_norm\": 0.4729064039408867,\n \"acc_norm_stderr\": 0.03512819077876106\n\ \ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \ \ \"acc\": 0.62,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\"\ : 0.62,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\ : {\n \"acc\": 0.7757575757575758,\n \"acc_stderr\": 0.03256866661681102,\n\ \ \"acc_norm\": 0.7757575757575758,\n \"acc_norm_stderr\": 0.03256866661681102\n\ \ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\ : 0.7323232323232324,\n \"acc_stderr\": 0.03154449888270286,\n \"\ acc_norm\": 0.7323232323232324,\n \"acc_norm_stderr\": 0.03154449888270286\n\ \ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\ \ \"acc\": 0.8393782383419689,\n \"acc_stderr\": 0.02649905770139746,\n\ \ \"acc_norm\": 0.8393782383419689,\n \"acc_norm_stderr\": 0.02649905770139746\n\ \ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \ \ \"acc\": 0.6128205128205129,\n 
\"acc_stderr\": 0.024697216930878937,\n\ \ \"acc_norm\": 0.6128205128205129,\n \"acc_norm_stderr\": 0.024697216930878937\n\ \ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\ acc\": 0.2962962962962963,\n \"acc_stderr\": 0.027840811495871934,\n \ \ \"acc_norm\": 0.2962962962962963,\n \"acc_norm_stderr\": 0.027840811495871934\n\ \ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \ \ \"acc\": 0.6680672268907563,\n \"acc_stderr\": 0.03058869701378364,\n \ \ \"acc_norm\": 0.6680672268907563,\n \"acc_norm_stderr\": 0.03058869701378364\n\ \ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\ : 0.3509933774834437,\n \"acc_stderr\": 0.03896981964257375,\n \"\ acc_norm\": 0.3509933774834437,\n \"acc_norm_stderr\": 0.03896981964257375\n\ \ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\ : 0.8128440366972477,\n \"acc_stderr\": 0.016722684526200144,\n \"\ acc_norm\": 0.8128440366972477,\n \"acc_norm_stderr\": 0.016722684526200144\n\ \ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\ : 0.4675925925925926,\n \"acc_stderr\": 0.03402801581358966,\n \"\ acc_norm\": 0.4675925925925926,\n \"acc_norm_stderr\": 0.03402801581358966\n\ \ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\ : 0.7794117647058824,\n \"acc_stderr\": 0.029102254389674082,\n \"\ acc_norm\": 0.7794117647058824,\n \"acc_norm_stderr\": 0.029102254389674082\n\ \ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\ acc\": 0.7637130801687764,\n \"acc_stderr\": 0.02765215314415927,\n \ \ \"acc_norm\": 0.7637130801687764,\n \"acc_norm_stderr\": 0.02765215314415927\n\ \ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.695067264573991,\n\ \ \"acc_stderr\": 0.030898610882477515,\n \"acc_norm\": 0.695067264573991,\n\ \ \"acc_norm_stderr\": 0.030898610882477515\n },\n \"harness|hendrycksTest-human_sexuality|5\"\ : {\n \"acc\": 0.7251908396946565,\n \"acc_stderr\": 0.039153454088478354,\n\ \ 
\"acc_norm\": 0.7251908396946565,\n \"acc_norm_stderr\": 0.039153454088478354\n\ \ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\ \ 0.8016528925619835,\n \"acc_stderr\": 0.03640118271990947,\n \"\ acc_norm\": 0.8016528925619835,\n \"acc_norm_stderr\": 0.03640118271990947\n\ \ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7777777777777778,\n\ \ \"acc_stderr\": 0.040191074725573483,\n \"acc_norm\": 0.7777777777777778,\n\ \ \"acc_norm_stderr\": 0.040191074725573483\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\ : {\n \"acc\": 0.7361963190184049,\n \"acc_stderr\": 0.03462419931615623,\n\ \ \"acc_norm\": 0.7361963190184049,\n \"acc_norm_stderr\": 0.03462419931615623\n\ \ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.44642857142857145,\n\ \ \"acc_stderr\": 0.04718471485219588,\n \"acc_norm\": 0.44642857142857145,\n\ \ \"acc_norm_stderr\": 0.04718471485219588\n },\n \"harness|hendrycksTest-management|5\"\ : {\n \"acc\": 0.7864077669902912,\n \"acc_stderr\": 0.040580420156460344,\n\ \ \"acc_norm\": 0.7864077669902912,\n \"acc_norm_stderr\": 0.040580420156460344\n\ \ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8461538461538461,\n\ \ \"acc_stderr\": 0.023636873317489267,\n \"acc_norm\": 0.8461538461538461,\n\ \ \"acc_norm_stderr\": 0.023636873317489267\n },\n \"harness|hendrycksTest-medical_genetics|5\"\ : {\n \"acc\": 0.65,\n \"acc_stderr\": 0.0479372485441102,\n \ \ \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n\ \ \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7994891443167306,\n\ \ \"acc_stderr\": 0.014317653708594204,\n \"acc_norm\": 0.7994891443167306,\n\ \ \"acc_norm_stderr\": 0.014317653708594204\n },\n \"harness|hendrycksTest-moral_disputes|5\"\ : {\n \"acc\": 0.6936416184971098,\n \"acc_stderr\": 0.024818350129436593,\n\ \ \"acc_norm\": 0.6936416184971098,\n \"acc_norm_stderr\": 0.024818350129436593\n\ \ },\n \"harness|hendrycksTest-moral_scenarios|5\": 
{\n \"acc\": 0.37094972067039106,\n\ \ \"acc_stderr\": 0.016155910721341763,\n \"acc_norm\": 0.37094972067039106,\n\ \ \"acc_norm_stderr\": 0.016155910721341763\n },\n \"harness|hendrycksTest-nutrition|5\"\ : {\n \"acc\": 0.6895424836601307,\n \"acc_stderr\": 0.026493033225145894,\n\ \ \"acc_norm\": 0.6895424836601307,\n \"acc_norm_stderr\": 0.026493033225145894\n\ \ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6752411575562701,\n\ \ \"acc_stderr\": 0.026596782287697043,\n \"acc_norm\": 0.6752411575562701,\n\ \ \"acc_norm_stderr\": 0.026596782287697043\n },\n \"harness|hendrycksTest-prehistory|5\"\ : {\n \"acc\": 0.6728395061728395,\n \"acc_stderr\": 0.026105673861409818,\n\ \ \"acc_norm\": 0.6728395061728395,\n \"acc_norm_stderr\": 0.026105673861409818\n\ \ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\ acc\": 0.450354609929078,\n \"acc_stderr\": 0.029680105565029036,\n \ \ \"acc_norm\": 0.450354609929078,\n \"acc_norm_stderr\": 0.029680105565029036\n\ \ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4589308996088657,\n\ \ \"acc_stderr\": 0.012727084826799798,\n \"acc_norm\": 0.4589308996088657,\n\ \ \"acc_norm_stderr\": 0.012727084826799798\n },\n \"harness|hendrycksTest-professional_medicine|5\"\ : {\n \"acc\": 0.6507352941176471,\n \"acc_stderr\": 0.028959755196824866,\n\ \ \"acc_norm\": 0.6507352941176471,\n \"acc_norm_stderr\": 0.028959755196824866\n\ \ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\ acc\": 0.6013071895424836,\n \"acc_stderr\": 0.019808281317449848,\n \ \ \"acc_norm\": 0.6013071895424836,\n \"acc_norm_stderr\": 0.019808281317449848\n\ \ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6545454545454545,\n\ \ \"acc_stderr\": 0.04554619617541054,\n \"acc_norm\": 0.6545454545454545,\n\ \ \"acc_norm_stderr\": 0.04554619617541054\n },\n \"harness|hendrycksTest-security_studies|5\"\ : {\n \"acc\": 0.6938775510204082,\n \"acc_stderr\": 0.029504896454595957,\n\ \ 
\"acc_norm\": 0.6938775510204082,\n \"acc_norm_stderr\": 0.029504896454595957\n\ \ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7960199004975125,\n\ \ \"acc_stderr\": 0.02849317624532607,\n \"acc_norm\": 0.7960199004975125,\n\ \ \"acc_norm_stderr\": 0.02849317624532607\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\ : {\n \"acc\": 0.81,\n \"acc_stderr\": 0.03942772444036625,\n \ \ \"acc_norm\": 0.81,\n \"acc_norm_stderr\": 0.03942772444036625\n \ \ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.536144578313253,\n\ \ \"acc_stderr\": 0.038823108508905954,\n \"acc_norm\": 0.536144578313253,\n\ \ \"acc_norm_stderr\": 0.038823108508905954\n },\n \"harness|hendrycksTest-world_religions|5\"\ : {\n \"acc\": 0.8011695906432749,\n \"acc_stderr\": 0.030611116557432528,\n\ \ \"acc_norm\": 0.8011695906432749,\n \"acc_norm_stderr\": 0.030611116557432528\n\ \ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3011015911872705,\n\ \ \"mc1_stderr\": 0.016058999026100612,\n \"mc2\": 0.45630119540652325,\n\ \ \"mc2_stderr\": 0.015576968550952539\n },\n \"harness|winogrande|5\"\ : {\n \"acc\": 0.7482241515390686,\n \"acc_stderr\": 0.01219848910025978\n\ \ },\n \"harness|drop|3\": {\n \"em\": 0.2489513422818792,\n \ \ \"em_stderr\": 0.004428237695563259,\n \"f1\": 0.31078754194630925,\n\ \ \"f1_stderr\": 0.004386404073454112\n },\n \"harness|gsm8k|5\": {\n\ \ \"acc\": 0.19711902956785443,\n \"acc_stderr\": 0.01095802163030063\n\ \ }\n}\n```" repo_url: https://huggingface.co/hfl/chinese-alpaca-2-13b leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_arc_challenge_25 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|arc:challenge|25_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|arc:challenge|25_2023-11-20T07-34-10.299317.parquet' - config_name: harness_drop_3 data_files: - split:
2023_11_20T07_34_10.299317 path: - '**/details_harness|drop|3_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|drop|3_2023-11-20T07-34-10.299317.parquet' - config_name: harness_gsm8k_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|gsm8k|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hellaswag_10 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hellaswag|10_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hellaswag|10_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-20T07-34-10.299317.parquet' - 
'**/details_harness|hendrycksTest-econometrics|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2023-11-20T07-34-10.299317.parquet' - 
'**/details_harness|hendrycksTest-human_sexuality|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-international_law|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-management|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-marketing|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-professional_accounting|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-sociology|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-20T07-34-10.299317.parquet' - 
'**/details_harness|hendrycksTest-virology|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2023-11-20T07-34-10.299317.parquet' - 
'**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-international_law|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-management|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-marketing|5_2023-11-20T07-34-10.299317.parquet' 
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-professional_accounting|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-sociology|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-virology|5_2023-11-20T07-34-10.299317.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_abstract_algebra_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_anatomy_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - 
'**/details_harness|hendrycksTest-anatomy|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-anatomy|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_astronomy_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-astronomy|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-astronomy|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_business_ethics_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-business_ethics|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-business_ethics|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_clinical_knowledge_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_college_biology_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-college_biology|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_biology|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_college_chemistry_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-college_chemistry|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_chemistry|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_college_computer_science_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-college_computer_science|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-college_computer_science|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_college_mathematics_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-college_mathematics|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_mathematics|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_college_medicine_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-college_medicine|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_medicine|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_college_physics_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-college_physics|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_physics|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_computer_security_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-computer_security|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-computer_security|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_conceptual_physics_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_econometrics_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-econometrics|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-econometrics|5_2023-11-20T07-34-10.299317.parquet' - config_name: 
harness_hendrycksTest_electrical_engineering_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_elementary_mathematics_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_formal_logic_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-formal_logic|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-formal_logic|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_global_facts_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-global_facts|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-global_facts|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_high_school_biology_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-high_school_biology|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_biology|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_high_school_chemistry_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_high_school_computer_science_5 data_files: - split: 
2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_high_school_european_history_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_high_school_geography_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-high_school_geography|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_geography|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_high_school_government_and_politics_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_high_school_macroeconomics_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_high_school_mathematics_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-20T07-34-10.299317.parquet' - 
config_name: harness_hendrycksTest_high_school_microeconomics_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_high_school_physics_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-high_school_physics|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_physics|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_high_school_psychology_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_high_school_statistics_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_high_school_us_history_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_high_school_world_history_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_human_aging_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-human_aging|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_aging|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_human_sexuality_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-human_sexuality|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_sexuality|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_international_law_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-international_law|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-international_law|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_jurisprudence_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-jurisprudence|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-jurisprudence|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_logical_fallacies_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_machine_learning_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-machine_learning|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-machine_learning|5_2023-11-20T07-34-10.299317.parquet' - config_name: 
harness_hendrycksTest_management_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-management|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-management|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_marketing_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-marketing|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-marketing|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_medical_genetics_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-medical_genetics|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-medical_genetics|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_miscellaneous_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-miscellaneous|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-miscellaneous|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_moral_disputes_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-moral_disputes|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_disputes|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_moral_scenarios_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_nutrition_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-nutrition|5_2023-11-20T07-34-10.299317.parquet' 
- split: latest path: - '**/details_harness|hendrycksTest-nutrition|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_philosophy_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-philosophy|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-philosophy|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_prehistory_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-prehistory|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-prehistory|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_professional_accounting_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-professional_accounting|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_accounting|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_professional_law_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-professional_law|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_law|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_professional_medicine_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-professional_medicine|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_medicine|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_professional_psychology_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-professional_psychology|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-professional_psychology|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_public_relations_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-public_relations|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-public_relations|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_security_studies_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-security_studies|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-security_studies|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_sociology_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-sociology|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-sociology|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_us_foreign_policy_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_virology_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-virology|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-virology|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_hendrycksTest_world_religions_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|hendrycksTest-world_religions|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|hendrycksTest-world_religions|5_2023-11-20T07-34-10.299317.parquet' - config_name: harness_truthfulqa_mc_0 data_files: - split: 
2023_11_20T07_34_10.299317 path: - '**/details_harness|truthfulqa:mc|0_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|truthfulqa:mc|0_2023-11-20T07-34-10.299317.parquet' - config_name: harness_winogrande_5 data_files: - split: 2023_11_20T07_34_10.299317 path: - '**/details_harness|winogrande|5_2023-11-20T07-34-10.299317.parquet' - split: latest path: - '**/details_harness|winogrande|5_2023-11-20T07-34-10.299317.parquet' - config_name: results data_files: - split: 2023_11_20T07_34_10.299317 path: - results_2023-11-20T07-34-10.299317.parquet - split: latest path: - results_2023-11-20T07-34-10.299317.parquet --- # Dataset Card for Evaluation run of SciPhi/SciPhi-Self-RAG-Mistral-7B-32k ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/SciPhi/SciPhi-Self-RAG-Mistral-7B-32k - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** clementine@hf.co ### Dataset Summary Dataset automatically created during the evaluation run of model [SciPhi/SciPhi-Self-RAG-Mistral-7B-32k](https://huggingface.co/SciPhi/SciPhi-Self-RAG-Mistral-7B-32k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 64 configuration, each one coresponding to one of the evaluated task. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results. An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). 
To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_SciPhi__SciPhi-Self-RAG-Mistral-7B-32k_public", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-11-20T07:34:10.299317](https://huggingface.co/datasets/open-llm-leaderboard/details_SciPhi__SciPhi-Self-RAG-Mistral-7B-32k_public/blob/main/results_2023-11-20T07-34-10.299317.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6025602935621713, "acc_stderr": 0.03298893084060298, "acc_norm": 0.6108691668755563, "acc_norm_stderr": 0.033697327638679664, "mc1": 0.3011015911872705, "mc1_stderr": 0.016058999026100612, "mc2": 0.45630119540652325, "mc2_stderr": 0.015576968550952539, "em": 0.2489513422818792, "em_stderr": 0.004428237695563259, "f1": 0.31078754194630925, "f1_stderr": 0.004386404073454112 }, "harness|arc:challenge|25": { "acc": 0.5341296928327645, "acc_stderr": 0.0145773113152311, "acc_norm": 0.5733788395904437, "acc_norm_stderr": 0.014453185592920293 }, "harness|hellaswag|10": { "acc": 0.6132244572794264, "acc_stderr": 0.004860162076330984, "acc_norm": 0.8044214299940251, "acc_norm_stderr": 0.003958347934520328 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.29, "acc_stderr": 0.045604802157206845, "acc_norm": 0.29, "acc_norm_stderr": 0.045604802157206845 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.48148148148148145, "acc_stderr": 0.043163785995113245, "acc_norm": 0.48148148148148145, "acc_norm_stderr": 0.043163785995113245 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6513157894736842, "acc_stderr": 0.0387813988879761, "acc_norm": 0.6513157894736842, "acc_norm_stderr": 0.0387813988879761 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.56, "acc_stderr": 
0.04988876515698589, "acc_norm": 0.56, "acc_norm_stderr": 0.04988876515698589 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6264150943396226, "acc_stderr": 0.029773082713319878, "acc_norm": 0.6264150943396226, "acc_norm_stderr": 0.029773082713319878 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7083333333333334, "acc_stderr": 0.03800968060554859, "acc_norm": 0.7083333333333334, "acc_norm_stderr": 0.03800968060554859 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.44, "acc_stderr": 0.0498887651569859, "acc_norm": 0.44, "acc_norm_stderr": 0.0498887651569859 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.48, "acc_stderr": 0.050211673156867795, "acc_norm": 0.48, "acc_norm_stderr": 0.050211673156867795 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.35, "acc_stderr": 0.047937248544110196, "acc_norm": 0.35, "acc_norm_stderr": 0.047937248544110196 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6242774566473989, "acc_stderr": 0.036928207672648664, "acc_norm": 0.6242774566473989, "acc_norm_stderr": 0.036928207672648664 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.4411764705882353, "acc_stderr": 0.049406356306056595, "acc_norm": 0.4411764705882353, "acc_norm_stderr": 0.049406356306056595 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.71, "acc_stderr": 0.045604802157206845, "acc_norm": 0.71, "acc_norm_stderr": 0.045604802157206845 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5063829787234042, "acc_stderr": 0.032683358999363366, "acc_norm": 0.5063829787234042, "acc_norm_stderr": 0.032683358999363366 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.38596491228070173, "acc_stderr": 0.045796394220704334, "acc_norm": 0.38596491228070173, "acc_norm_stderr": 0.045796394220704334 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5310344827586206, "acc_stderr": 0.04158632762097828, "acc_norm": 0.5310344827586206, "acc_norm_stderr": 
0.04158632762097828 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.42063492063492064, "acc_stderr": 0.025424835086923992, "acc_norm": 0.42063492063492064, "acc_norm_stderr": 0.025424835086923992 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.3888888888888889, "acc_stderr": 0.04360314860077459, "acc_norm": 0.3888888888888889, "acc_norm_stderr": 0.04360314860077459 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.33, "acc_stderr": 0.04725815626252604, "acc_norm": 0.33, "acc_norm_stderr": 0.04725815626252604 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7258064516129032, "acc_stderr": 0.0253781399708852, "acc_norm": 0.7258064516129032, "acc_norm_stderr": 0.0253781399708852 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.4729064039408867, "acc_stderr": 0.03512819077876106, "acc_norm": 0.4729064039408867, "acc_norm_stderr": 0.03512819077876106 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.62, "acc_stderr": 0.048783173121456316, "acc_norm": 0.62, "acc_norm_stderr": 0.048783173121456316 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7757575757575758, "acc_stderr": 0.03256866661681102, "acc_norm": 0.7757575757575758, "acc_norm_stderr": 0.03256866661681102 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7323232323232324, "acc_stderr": 0.03154449888270286, "acc_norm": 0.7323232323232324, "acc_norm_stderr": 0.03154449888270286 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8393782383419689, "acc_stderr": 0.02649905770139746, "acc_norm": 0.8393782383419689, "acc_norm_stderr": 0.02649905770139746 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6128205128205129, "acc_stderr": 0.024697216930878937, "acc_norm": 0.6128205128205129, "acc_norm_stderr": 0.024697216930878937 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.2962962962962963, "acc_stderr": 0.027840811495871934, "acc_norm": 
0.2962962962962963, "acc_norm_stderr": 0.027840811495871934 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6680672268907563, "acc_stderr": 0.03058869701378364, "acc_norm": 0.6680672268907563, "acc_norm_stderr": 0.03058869701378364 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.3509933774834437, "acc_stderr": 0.03896981964257375, "acc_norm": 0.3509933774834437, "acc_norm_stderr": 0.03896981964257375 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8128440366972477, "acc_stderr": 0.016722684526200144, "acc_norm": 0.8128440366972477, "acc_norm_stderr": 0.016722684526200144 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.4675925925925926, "acc_stderr": 0.03402801581358966, "acc_norm": 0.4675925925925926, "acc_norm_stderr": 0.03402801581358966 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7794117647058824, "acc_stderr": 0.029102254389674082, "acc_norm": 0.7794117647058824, "acc_norm_stderr": 0.029102254389674082 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7637130801687764, "acc_stderr": 0.02765215314415927, "acc_norm": 0.7637130801687764, "acc_norm_stderr": 0.02765215314415927 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.695067264573991, "acc_stderr": 0.030898610882477515, "acc_norm": 0.695067264573991, "acc_norm_stderr": 0.030898610882477515 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7251908396946565, "acc_stderr": 0.039153454088478354, "acc_norm": 0.7251908396946565, "acc_norm_stderr": 0.039153454088478354 }, "harness|hendrycksTest-international_law|5": { "acc": 0.8016528925619835, "acc_stderr": 0.03640118271990947, "acc_norm": 0.8016528925619835, "acc_norm_stderr": 0.03640118271990947 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7777777777777778, "acc_stderr": 0.040191074725573483, "acc_norm": 0.7777777777777778, "acc_norm_stderr": 0.040191074725573483 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7361963190184049, 
"acc_stderr": 0.03462419931615623, "acc_norm": 0.7361963190184049, "acc_norm_stderr": 0.03462419931615623 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.44642857142857145, "acc_stderr": 0.04718471485219588, "acc_norm": 0.44642857142857145, "acc_norm_stderr": 0.04718471485219588 }, "harness|hendrycksTest-management|5": { "acc": 0.7864077669902912, "acc_stderr": 0.040580420156460344, "acc_norm": 0.7864077669902912, "acc_norm_stderr": 0.040580420156460344 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8461538461538461, "acc_stderr": 0.023636873317489267, "acc_norm": 0.8461538461538461, "acc_norm_stderr": 0.023636873317489267 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.65, "acc_stderr": 0.0479372485441102, "acc_norm": 0.65, "acc_norm_stderr": 0.0479372485441102 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.7994891443167306, "acc_stderr": 0.014317653708594204, "acc_norm": 0.7994891443167306, "acc_norm_stderr": 0.014317653708594204 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.6936416184971098, "acc_stderr": 0.024818350129436593, "acc_norm": 0.6936416184971098, "acc_norm_stderr": 0.024818350129436593 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.37094972067039106, "acc_stderr": 0.016155910721341763, "acc_norm": 0.37094972067039106, "acc_norm_stderr": 0.016155910721341763 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.6895424836601307, "acc_stderr": 0.026493033225145894, "acc_norm": 0.6895424836601307, "acc_norm_stderr": 0.026493033225145894 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.6752411575562701, "acc_stderr": 0.026596782287697043, "acc_norm": 0.6752411575562701, "acc_norm_stderr": 0.026596782287697043 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.6728395061728395, "acc_stderr": 0.026105673861409818, "acc_norm": 0.6728395061728395, "acc_norm_stderr": 0.026105673861409818 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.450354609929078, "acc_stderr": 0.029680105565029036, 
"acc_norm": 0.450354609929078, "acc_norm_stderr": 0.029680105565029036 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4589308996088657, "acc_stderr": 0.012727084826799798, "acc_norm": 0.4589308996088657, "acc_norm_stderr": 0.012727084826799798 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6507352941176471, "acc_stderr": 0.028959755196824866, "acc_norm": 0.6507352941176471, "acc_norm_stderr": 0.028959755196824866 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6013071895424836, "acc_stderr": 0.019808281317449848, "acc_norm": 0.6013071895424836, "acc_norm_stderr": 0.019808281317449848 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6545454545454545, "acc_stderr": 0.04554619617541054, "acc_norm": 0.6545454545454545, "acc_norm_stderr": 0.04554619617541054 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.6938775510204082, "acc_stderr": 0.029504896454595957, "acc_norm": 0.6938775510204082, "acc_norm_stderr": 0.029504896454595957 }, "harness|hendrycksTest-sociology|5": { "acc": 0.7960199004975125, "acc_stderr": 0.02849317624532607, "acc_norm": 0.7960199004975125, "acc_norm_stderr": 0.02849317624532607 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.81, "acc_stderr": 0.03942772444036625, "acc_norm": 0.81, "acc_norm_stderr": 0.03942772444036625 }, "harness|hendrycksTest-virology|5": { "acc": 0.536144578313253, "acc_stderr": 0.038823108508905954, "acc_norm": 0.536144578313253, "acc_norm_stderr": 0.038823108508905954 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8011695906432749, "acc_stderr": 0.030611116557432528, "acc_norm": 0.8011695906432749, "acc_norm_stderr": 0.030611116557432528 }, "harness|truthfulqa:mc|0": { "mc1": 0.3011015911872705, "mc1_stderr": 0.016058999026100612, "mc2": 0.45630119540652325, "mc2_stderr": 0.015576968550952539 }, "harness|winogrande|5": { "acc": 0.7482241515390686, "acc_stderr": 0.01219848910025978 }, "harness|drop|3": { "em": 0.2489513422818792, "em_stderr": 
0.004428237695563259, "f1": 0.31078754194630925, "f1_stderr": 0.004386404073454112 }, "harness|gsm8k|5": { "acc": 0.19711902956785443, "acc_stderr": 0.01095802163030063 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
Hitochu/hate-speech-fr
--- license: wtfpl ---
LahiruLowe/flan2021_filtered_2pertask
--- dataset_info: features: - name: inputs dtype: string - name: targets dtype: string - name: task_source dtype: string - name: task_name dtype: string - name: template_type dtype: string splits: - name: train num_bytes: 134685 num_examples: 140 download_size: 0 dataset_size: 134685 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "flan2021_filtered_2pertask" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Tom9000/wikitext-csv
--- license: apache-2.0 --- This file is not entirely identical to the default "wiki.train.raw" dataset used with llama.cpp, so instead of this one, get the recommended one from here: https://huggingface.co/datasets/ggml-org/ci
irds/gov
--- pretty_name: '`gov`' viewer: false source_datasets: [] task_categories: - text-retrieval --- # Dataset Card for `gov` The `gov` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/gov#gov). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=1,247,753 This dataset is used by: [`gov_trec-web-2002`](https://huggingface.co/datasets/irds/gov_trec-web-2002), [`gov_trec-web-2002_named-page`](https://huggingface.co/datasets/irds/gov_trec-web-2002_named-page), [`gov_trec-web-2003`](https://huggingface.co/datasets/irds/gov_trec-web-2003), [`gov_trec-web-2003_named-page`](https://huggingface.co/datasets/irds/gov_trec-web-2003_named-page), [`gov_trec-web-2004`](https://huggingface.co/datasets/irds/gov_trec-web-2004) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/gov', 'docs') for record in docs: record # {'doc_id': ..., 'url': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
glaiveai/glaive-code-assistant
--- license: apache-2.0 size_categories: - 100K<n<1M --- # Glaive-code-assistant Glaive-code-assistant is a dataset of ~140k code problems and solutions generated using Glaive’s synthetic data generation platform. The data is intended to make models act as code assistants, so it is structured in a QA format where the questions are worded similarly to how real users ask code-related questions. Roughly 60% of the samples are Python. To report any problems or suggestions in the data, join the [Glaive discord](https://discord.gg/fjQ4uf3yWD)
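As a rough illustration of how a figure like the "~60% Python" share could be estimated, here is a minimal, hypothetical sketch; the `answer` field name and the detection rule are assumptions for illustration, not the dataset's documented schema:

```python
def looks_like_python(answer: str) -> bool:
    """Rough heuristic: treat an answer as Python if any line starts
    with a function definition or an import statement."""
    for line in answer.splitlines():
        stripped = line.strip()
        if stripped.startswith(("def ", "import ", "from ")):
            return True
    return False

# Toy records standing in for real rows (which you would stream with the
# `datasets` library); the field names here are illustrative only.
samples = [
    {"question": "How do I square numbers in a list?",
     "answer": "Use a comprehension:\ndef squares(xs):\n    return [x * x for x in xs]"},
    {"question": "How do I select names in SQL?",
     "answer": "SELECT name FROM users;"},
]
python_share = sum(looks_like_python(s["answer"]) for s in samples) / len(samples)
```

On the real data you would stream the rows and apply the same kind of heuristic to estimate the language mix.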
mask-distilled-onesec-cv12-each-chunk-uniq/chunk_213
--- dataset_info: features: - name: logits sequence: float32 - name: mfcc sequence: sequence: float64 splits: - name: train num_bytes: 1262316984.0 num_examples: 247902 download_size: 1290293667 dataset_size: 1262316984.0 --- # Dataset Card for "chunk_213" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
active-learning/to_label_samples
--- dataset_info: features: - name: image dtype: image splits: - name: train num_bytes: 1391.0870983935743 num_examples: 5 download_size: 3878 dataset_size: 1391.0870983935743 --- # Dataset Card for "to_label_samples" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
pharaouk/SPP_30K_reasoning_tasks
--- dataset_info: features: - name: type dtype: int64 - name: instruction dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 44253001 num_examples: 89898 download_size: 10073876 dataset_size: 44253001 task_categories: - text-generation - conversational - text2text-generation language: - en tags: - code - python - reasoning pretty_name: SPP python reasoning tasks --- # Dataset Card for "SPP_30K_verified_tasks" ### Dataset Summary This is an augmented version of the [Synthetic Python Problems (SPP) Dataset](https://huggingface.co/datasets/wuyetao/spp). This dataset has been generated from the subset of the data that has been de-duplicated and verified using a Python interpreter (SPP_30k_verified.jsonl). The original dataset contains small Python functions that include a docstring with a small description of what the function does and some calling examples for the function. The current dataset includes three different tasks: - Type 1: give the code as input and ask the model to generate some example calls along with the expected return value - Type 2: give the description and the example calls and ask the model to write the function. - Type 3: give the function and the example calls without the expected value as input and ask the model to write what the function should return. For example: ```python def lowest_prime_factor(n): """ Find the smallest prime factor of a given number. Examples: >>> lowest_prime_factor(12) 2 >>> lowest_prime_factor(14) 2 >>> lowest_prime_factor(8) 2 """ smallest = 2 while(n % smallest!= 0): smallest+=1 return smallest ``` First task: ``` INSTRUCTION: Execute the following function with several example calls, showing the values returned in each case. INPUT: def lowest_prime_factor(n): smallest = 2 while(n % smallest!= 0): smallest+=1 return smallest OUTPUT: Sure, here you have several example calls along the return values they will generate.
""" >>> lowest_prime_factor(12) 2 >>> lowest_prime_factor(14) 2 >>> lowest_prime_factor(8) 2 """ ``` Second task: ``` INSTRUCTION: Follow the task given and refer to the provided calling examples as you create the function's body. INPUT: Find the smallest prime factor of a given number. Examples: >>> lowest_prime_factor(12) 2 >>> lowest_prime_factor(14) 2 >>> lowest_prime_factor(8) 2 OUTPUT: Of course, here's a function implementation based on the provided calling examples and task. """ python def lowest_prime_factor(n): smallest = 2 while(n % smallest!= 0): smallest+=1 return smallest """ ``` Third task: ``` INSTRUCTION: Examine the given function and the accompanying instance function calls, and outline the specific output generated by each call. INPUT: def lowest_prime_factor(n): smallest = 2 while(n % smallest!= 0): smallest+=1 return smallest Function calls: >>> lowest_prime_factor(12) >>> lowest_prime_factor(14) >>> lowest_prime_factor(8) OUTPUT: Sure thing, here you have the specific output generated by each function call. >>> lowest_prime_factor(12) 2 >>> lowest_prime_factor(14) 2 >>> lowest_prime_factor(8) 2 ``` ### Languages The dataset is in English. ### Data Splits There are no splits (Only training). ## Dataset Creation August of 2023 ### Curation Rationale This dataset was created to improve the Python 3 reasoning/understanding capabilities of LLMs. ### Source Data The source dataset can be found at [Synthetic Python Problems(SPP) Dataset](https://huggingface.co/datasets/wuyetao/spp). ### Annotations The dataset includes an instruction, input, output and type columns. The type colum indicates the type of task (from 1 to 3). #### Annotation process The responses were generated parsing the docstrings of the functions.
mask-distilled-one-sec-cv12/chunk_103
--- dataset_info: features: - name: logits sequence: float32 - name: mfcc sequence: sequence: float64 splits: - name: train num_bytes: 1389107784 num_examples: 272802 download_size: 1394978083 dataset_size: 1389107784 --- # Dataset Card for "chunk_103" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
irds/mr-tydi_id
--- pretty_name: '`mr-tydi/id`' viewer: false source_datasets: [] task_categories: - text-retrieval --- # Dataset Card for `mr-tydi/id` The `mr-tydi/id` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/mr-tydi#mr-tydi/id). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=1,469,399 - `queries` (i.e., topics); count=6,977 - `qrels`: (relevance assessments); count=7,087 This dataset is used by: [`mr-tydi_id_dev`](https://huggingface.co/datasets/irds/mr-tydi_id_dev), [`mr-tydi_id_test`](https://huggingface.co/datasets/irds/mr-tydi_id_test), [`mr-tydi_id_train`](https://huggingface.co/datasets/irds/mr-tydi_id_train) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/mr-tydi_id', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} queries = load_dataset('irds/mr-tydi_id', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/mr-tydi_id', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Zhang2021MrTyDi, title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval}, author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin}, year={2021}, journal={arXiv:2108.08787}, } @article{Clark2020TyDiQa, title={{TyDi QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages}, author={Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki}, year={2020}, journal={Transactions of the Association for Computational Linguistics} } ```
azain/LibriTTS-100-samples
--- dataset_info: features: - name: audio dtype: audio - name: transcript dtype: string - name: speaker_id dtype: string splits: - name: train num_bytes: 25977218.437238492 num_examples: 100 download_size: 25737872 dataset_size: 25977218.437238492 configs: - config_name: default data_files: - split: train path: data/train-* ---
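The split metadata above already pins down the average example size; a quick back-of-the-envelope check, using only the numbers declared in the YAML (no download needed):

```python
# Derived from the dataset_info block above: 100 audio + transcript examples
# totalling ~26 MB, i.e. roughly a quarter-mebibyte of audio per clip.
num_bytes = 25977218.437238492
num_examples = 100

avg_kib = num_bytes / num_examples / 1024  # mean example size in KiB
print(f"~{avg_kib:.0f} KiB per example")
```

This is only a sanity check on the declared metadata, not a measurement of the actual files.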
chiHang/clothes_dataset
--- dataset_info: features: - name: image dtype: image splits: - name: train num_bytes: 230456480.0 num_examples: 64 download_size: 226942310 dataset_size: 230456480.0 --- # Dataset Card for "clothes_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
davidgaofc/Shadow_inout
--- license: mit dataset_info: features: - name: Text dtype: string - name: Label dtype: int64 splits: - name: train num_bytes: 799895 num_examples: 1640 download_size: 348696 dataset_size: 799895 configs: - config_name: default data_files: - split: train path: data/train-* ---
andreweduffy/testy
--- dataset_info: features: - name: filename dtype: string - name: document_id dtype: string - name: passage_id dtype: string - name: text dtype: string splits: - name: train num_bytes: 11428076 num_examples: 6578 download_size: 5198408 dataset_size: 11428076 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "testy" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
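The `download_size`/`dataset_size` pair in the YAML above also implies how well these text passages compress into Parquet; a minimal sketch using only the declared numbers:

```python
# Numbers taken verbatim from the dataset_info block above.
dataset_size = 11428076   # uncompressed bytes across 6,578 text passages
download_size = 5198408   # compressed Parquet bytes on the Hub
num_examples = 6578

print(f"avg passage: {dataset_size / num_examples:.0f} bytes")
print(f"compression ratio: {dataset_size / download_size:.2f}x")
```

About 1.7 KB of text per passage, compressed roughly 2.2x by Parquet's columnar encoding.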
open-llm-leaderboard/details_KaeriJenti__kaori-34b-v4
--- pretty_name: Evaluation run of KaeriJenti/kaori-34b-v4 dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [KaeriJenti/kaori-34b-v4](https://huggingface.co/KaeriJenti/kaori-34b-v4) on the\ \ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 63 configurations, each one corresponding to one of the\ \ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" stores all the aggregated results of the\ \ run (and is used to compute and display the aggregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_KaeriJenti__kaori-34b-v4\"\ ,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\ These are the [latest results from run 2023-12-23T20:41:54.627172](https://huggingface.co/datasets/open-llm-leaderboard/details_KaeriJenti__kaori-34b-v4/blob/main/results_2023-12-23T20-41-54.627172.json) (note\ \ that there might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.2562435688049368,\n\ \ \"acc_stderr\": 0.03087677995486888,\n \"acc_norm\": 0.25622099120034325,\n\ \ \"acc_norm_stderr\": 0.03166775316506421,\n \"mc1\": 0.2864137086903305,\n\ \ \"mc1_stderr\": 0.015826142439502346,\n \"mc2\": 0.49462441219025927,\n\ \ \"mc2_stderr\": 0.016011015086112988\n },\n \"harness|arc:challenge|25\"\ : {\n \"acc\": 0.189419795221843,\n \"acc_stderr\": 0.011450705115910769,\n\ \ \"acc_norm\": 0.23890784982935154,\n \"acc_norm_stderr\": 0.012461071376316614\n\ \ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.27394941246763593,\n\ \ \"acc_stderr\": 0.004450718673552667,\n \"acc_norm\": 0.2896833300139414,\n\ \ \"acc_norm_stderr\": 0.004526883021027624\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\ : {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \ \ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \ \ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.2740740740740741,\n\ \ \"acc_stderr\": 0.03853254836552003,\n \"acc_norm\": 0.2740740740740741,\n\ \ \"acc_norm_stderr\": 0.03853254836552003\n },\n \"harness|hendrycksTest-astronomy|5\"\ : {\n \"acc\": 0.2236842105263158,\n \"acc_stderr\": 0.033911609343436025,\n\ \ \"acc_norm\": 0.2236842105263158,\n \"acc_norm_stderr\": 0.033911609343436025\n\ \ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.23,\n\ \ \"acc_stderr\": 0.04229525846816506,\n \"acc_norm\": 0.23,\n \ \ \"acc_norm_stderr\": 0.04229525846816506\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\ : {\n \"acc\": 0.2188679245283019,\n \"acc_stderr\": 0.025447863825108594,\n\ \ \"acc_norm\": 0.2188679245283019,\n \"acc_norm_stderr\": 0.025447863825108594\n\ \ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2222222222222222,\n\ \ \"acc_stderr\": 0.03476590104304136,\n \"acc_norm\": 0.2222222222222222,\n\ \ \"acc_norm_stderr\": 
0.03476590104304136\n },\n \"harness|hendrycksTest-college_chemistry|5\"\ : {\n \"acc\": 0.22,\n \"acc_stderr\": 0.0416333199893227,\n \ \ \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.0416333199893227\n },\n\ \ \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.3,\n\ \ \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \ \ \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_mathematics|5\"\ : {\n \"acc\": 0.24,\n \"acc_stderr\": 0.04292346959909282,\n \ \ \"acc_norm\": 0.24,\n \"acc_norm_stderr\": 0.04292346959909282\n \ \ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.21965317919075145,\n\ \ \"acc_stderr\": 0.031568093627031744,\n \"acc_norm\": 0.21965317919075145,\n\ \ \"acc_norm_stderr\": 0.031568093627031744\n },\n \"harness|hendrycksTest-college_physics|5\"\ : {\n \"acc\": 0.20588235294117646,\n \"acc_stderr\": 0.04023382273617747,\n\ \ \"acc_norm\": 0.20588235294117646,\n \"acc_norm_stderr\": 0.04023382273617747\n\ \ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\ \ 0.21,\n \"acc_stderr\": 0.040936018074033256,\n \"acc_norm\": 0.21,\n\ \ \"acc_norm_stderr\": 0.040936018074033256\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\ : {\n \"acc\": 0.30638297872340425,\n \"acc_stderr\": 0.030135906478517563,\n\ \ \"acc_norm\": 0.30638297872340425,\n \"acc_norm_stderr\": 0.030135906478517563\n\ \ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.20175438596491227,\n\ \ \"acc_stderr\": 0.037752050135836386,\n \"acc_norm\": 0.20175438596491227,\n\ \ \"acc_norm_stderr\": 0.037752050135836386\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\ : {\n \"acc\": 0.25517241379310346,\n \"acc_stderr\": 0.03632984052707842,\n\ \ \"acc_norm\": 0.25517241379310346,\n \"acc_norm_stderr\": 0.03632984052707842\n\ \ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\ : 0.2671957671957672,\n \"acc_stderr\": 0.02278967314577657,\n \"\ acc_norm\": 
0.2671957671957672,\n \"acc_norm_stderr\": 0.02278967314577657\n\ \ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.2857142857142857,\n\ \ \"acc_stderr\": 0.0404061017820884,\n \"acc_norm\": 0.2857142857142857,\n\ \ \"acc_norm_stderr\": 0.0404061017820884\n },\n \"harness|hendrycksTest-global_facts|5\"\ : {\n \"acc\": 0.18,\n \"acc_stderr\": 0.038612291966536934,\n \ \ \"acc_norm\": 0.18,\n \"acc_norm_stderr\": 0.038612291966536934\n \ \ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\ : 0.3161290322580645,\n \"acc_stderr\": 0.02645087448904277,\n \"\ acc_norm\": 0.3161290322580645,\n \"acc_norm_stderr\": 0.02645087448904277\n\ \ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\ : 0.19704433497536947,\n \"acc_stderr\": 0.027986724666736205,\n \"\ acc_norm\": 0.19704433497536947,\n \"acc_norm_stderr\": 0.027986724666736205\n\ \ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \ \ \"acc\": 0.24,\n \"acc_stderr\": 0.042923469599092816,\n \"acc_norm\"\ : 0.24,\n \"acc_norm_stderr\": 0.042923469599092816\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\ : {\n \"acc\": 0.23030303030303031,\n \"acc_stderr\": 0.032876667586034886,\n\ \ \"acc_norm\": 0.23030303030303031,\n \"acc_norm_stderr\": 0.032876667586034886\n\ \ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\ : 0.35353535353535354,\n \"acc_stderr\": 0.03406086723547153,\n \"\ acc_norm\": 0.35353535353535354,\n \"acc_norm_stderr\": 0.03406086723547153\n\ \ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\ \ \"acc\": 0.37823834196891193,\n \"acc_stderr\": 0.03499807276193339,\n\ \ \"acc_norm\": 0.37823834196891193,\n \"acc_norm_stderr\": 0.03499807276193339\n\ \ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \ \ \"acc\": 0.2128205128205128,\n \"acc_stderr\": 0.020752423722128016,\n\ \ \"acc_norm\": 0.2128205128205128,\n \"acc_norm_stderr\": 0.020752423722128016\n\ \ },\n 
\"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\ acc\": 0.3037037037037037,\n \"acc_stderr\": 0.02803792996911499,\n \ \ \"acc_norm\": 0.3037037037037037,\n \"acc_norm_stderr\": 0.02803792996911499\n\ \ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \ \ \"acc\": 0.21428571428571427,\n \"acc_stderr\": 0.026653531596715477,\n\ \ \"acc_norm\": 0.21428571428571427,\n \"acc_norm_stderr\": 0.026653531596715477\n\ \ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\ : 0.2847682119205298,\n \"acc_stderr\": 0.03684881521389023,\n \"\ acc_norm\": 0.2847682119205298,\n \"acc_norm_stderr\": 0.03684881521389023\n\ \ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\ : 0.3155963302752294,\n \"acc_stderr\": 0.019926117513869666,\n \"\ acc_norm\": 0.3155963302752294,\n \"acc_norm_stderr\": 0.019926117513869666\n\ \ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\ : 0.24074074074074073,\n \"acc_stderr\": 0.0291575221846056,\n \"\ acc_norm\": 0.24074074074074073,\n \"acc_norm_stderr\": 0.0291575221846056\n\ \ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\ : 0.30392156862745096,\n \"acc_stderr\": 0.03228210387037892,\n \"\ acc_norm\": 0.30392156862745096,\n \"acc_norm_stderr\": 0.03228210387037892\n\ \ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\ acc\": 0.22362869198312235,\n \"acc_stderr\": 0.027123298205229972,\n \ \ \"acc_norm\": 0.22362869198312235,\n \"acc_norm_stderr\": 0.027123298205229972\n\ \ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.24663677130044842,\n\ \ \"acc_stderr\": 0.028930413120910894,\n \"acc_norm\": 0.24663677130044842,\n\ \ \"acc_norm_stderr\": 0.028930413120910894\n },\n \"harness|hendrycksTest-human_sexuality|5\"\ : {\n \"acc\": 0.2366412213740458,\n \"acc_stderr\": 0.03727673575596918,\n\ \ \"acc_norm\": 0.2366412213740458,\n \"acc_norm_stderr\": 0.03727673575596918\n\ \ },\n 
\"harness|hendrycksTest-international_law|5\": {\n \"acc\":\ \ 0.3140495867768595,\n \"acc_stderr\": 0.042369647530410184,\n \"\ acc_norm\": 0.3140495867768595,\n \"acc_norm_stderr\": 0.042369647530410184\n\ \ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.2222222222222222,\n\ \ \"acc_stderr\": 0.040191074725573483,\n \"acc_norm\": 0.2222222222222222,\n\ \ \"acc_norm_stderr\": 0.040191074725573483\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\ : {\n \"acc\": 0.2883435582822086,\n \"acc_stderr\": 0.03559039531617342,\n\ \ \"acc_norm\": 0.2883435582822086,\n \"acc_norm_stderr\": 0.03559039531617342\n\ \ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.23214285714285715,\n\ \ \"acc_stderr\": 0.04007341809755805,\n \"acc_norm\": 0.23214285714285715,\n\ \ \"acc_norm_stderr\": 0.04007341809755805\n },\n \"harness|hendrycksTest-management|5\"\ : {\n \"acc\": 0.27184466019417475,\n \"acc_stderr\": 0.044052680241409216,\n\ \ \"acc_norm\": 0.27184466019417475,\n \"acc_norm_stderr\": 0.044052680241409216\n\ \ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.19230769230769232,\n\ \ \"acc_stderr\": 0.025819233256483706,\n \"acc_norm\": 0.19230769230769232,\n\ \ \"acc_norm_stderr\": 0.025819233256483706\n },\n \"harness|hendrycksTest-medical_genetics|5\"\ : {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \ \ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \ \ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.27330779054916987,\n\ \ \"acc_stderr\": 0.015936681062628556,\n \"acc_norm\": 0.27330779054916987,\n\ \ \"acc_norm_stderr\": 0.015936681062628556\n },\n \"harness|hendrycksTest-moral_disputes|5\"\ : {\n \"acc\": 0.2543352601156069,\n \"acc_stderr\": 0.02344582627654554,\n\ \ \"acc_norm\": 0.2543352601156069,\n \"acc_norm_stderr\": 0.02344582627654554\n\ \ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2636871508379888,\n\ \ \"acc_stderr\": 0.014736926383761973,\n 
\"acc_norm\": 0.2636871508379888,\n\ \ \"acc_norm_stderr\": 0.014736926383761973\n },\n \"harness|hendrycksTest-nutrition|5\"\ : {\n \"acc\": 0.2875816993464052,\n \"acc_stderr\": 0.02591780611714716,\n\ \ \"acc_norm\": 0.2875816993464052,\n \"acc_norm_stderr\": 0.02591780611714716\n\ \ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.2572347266881029,\n\ \ \"acc_stderr\": 0.024826171289250888,\n \"acc_norm\": 0.2572347266881029,\n\ \ \"acc_norm_stderr\": 0.024826171289250888\n },\n \"harness|hendrycksTest-prehistory|5\"\ : {\n \"acc\": 0.25617283950617287,\n \"acc_stderr\": 0.024288533637726095,\n\ \ \"acc_norm\": 0.25617283950617287,\n \"acc_norm_stderr\": 0.024288533637726095\n\ \ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\ acc\": 0.24822695035460993,\n \"acc_stderr\": 0.025770015644290396,\n \ \ \"acc_norm\": 0.24822695035460993,\n \"acc_norm_stderr\": 0.025770015644290396\n\ \ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.23272490221642764,\n\ \ \"acc_stderr\": 0.010792595553888496,\n \"acc_norm\": 0.23272490221642764,\n\ \ \"acc_norm_stderr\": 0.010792595553888496\n },\n \"harness|hendrycksTest-professional_medicine|5\"\ : {\n \"acc\": 0.22426470588235295,\n \"acc_stderr\": 0.02533684856333236,\n\ \ \"acc_norm\": 0.22426470588235295,\n \"acc_norm_stderr\": 0.02533684856333236\n\ \ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\ acc\": 0.2679738562091503,\n \"acc_stderr\": 0.017917974069594722,\n \ \ \"acc_norm\": 0.2679738562091503,\n \"acc_norm_stderr\": 0.017917974069594722\n\ \ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.2545454545454545,\n\ \ \"acc_stderr\": 0.04172343038705383,\n \"acc_norm\": 0.2545454545454545,\n\ \ \"acc_norm_stderr\": 0.04172343038705383\n },\n \"harness|hendrycksTest-security_studies|5\"\ : {\n \"acc\": 0.24897959183673468,\n \"acc_stderr\": 0.02768297952296023,\n\ \ \"acc_norm\": 0.24897959183673468,\n \"acc_norm_stderr\": 
0.02768297952296023\n\ \ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.1890547263681592,\n\ \ \"acc_stderr\": 0.027686913588013024,\n \"acc_norm\": 0.1890547263681592,\n\ \ \"acc_norm_stderr\": 0.027686913588013024\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\ : {\n \"acc\": 0.26,\n \"acc_stderr\": 0.04408440022768079,\n \ \ \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.04408440022768079\n \ \ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.26506024096385544,\n\ \ \"acc_stderr\": 0.03436024037944966,\n \"acc_norm\": 0.26506024096385544,\n\ \ \"acc_norm_stderr\": 0.03436024037944966\n },\n \"harness|hendrycksTest-world_religions|5\"\ : {\n \"acc\": 0.3157894736842105,\n \"acc_stderr\": 0.035650796707083106,\n\ \ \"acc_norm\": 0.3157894736842105,\n \"acc_norm_stderr\": 0.035650796707083106\n\ \ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2864137086903305,\n\ \ \"mc1_stderr\": 0.015826142439502346,\n \"mc2\": 0.49462441219025927,\n\ \ \"mc2_stderr\": 0.016011015086112988\n },\n \"harness|winogrande|5\"\ : {\n \"acc\": 0.5722178374112076,\n \"acc_stderr\": 0.013905134013839957\n\ \ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.006823351023502654,\n \ \ \"acc_stderr\": 0.0022675371022544905\n }\n}\n```" repo_url: https://huggingface.co/KaeriJenti/kaori-34b-v4 leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_arc_challenge_25 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|arc:challenge|25_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|arc:challenge|25_2023-12-23T20-41-54.627172.parquet' - config_name: harness_gsm8k_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|gsm8k|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hellaswag_10 data_files: - split: 
2023_12_23T20_41_54.627172 path: - '**/details_harness|hellaswag|10_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hellaswag|10_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2023-12-23T20-41-54.627172.parquet' - 
'**/details_harness|hendrycksTest-high_school_biology|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-international_law|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2023-12-23T20-41-54.627172.parquet' - 
'**/details_harness|hendrycksTest-management|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-marketing|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-sociology|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-virology|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2023-12-23T20-41-54.627172.parquet' - 
'**/details_harness|hendrycksTest-business_ethics|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-23T20-41-54.627172.parquet' - 
'**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-international_law|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-management|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-marketing|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2023-12-23T20-41-54.627172.parquet' - 
'**/details_harness|hendrycksTest-philosophy|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-sociology|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-virology|5_2023-12-23T20-41-54.627172.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_abstract_algebra_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_anatomy_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-anatomy|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-anatomy|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_astronomy_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-astronomy|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-astronomy|5_2023-12-23T20-41-54.627172.parquet' - config_name: 
harness_hendrycksTest_business_ethics_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-business_ethics|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-business_ethics|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_clinical_knowledge_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_college_biology_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-college_biology|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_biology|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_college_chemistry_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_college_computer_science_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_college_mathematics_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_college_medicine_5 data_files: - split: 
2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-college_medicine|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_medicine|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_college_physics_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-college_physics|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_physics|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_computer_security_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-computer_security|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-computer_security|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_conceptual_physics_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_econometrics_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-econometrics|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-econometrics|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_electrical_engineering_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_elementary_mathematics_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - 
'**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_formal_logic_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-formal_logic|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-formal_logic|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_global_facts_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-global_facts|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-global_facts|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_high_school_biology_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_high_school_chemistry_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_high_school_computer_science_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_high_school_european_history_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - 
'**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_high_school_geography_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_high_school_government_and_politics_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_high_school_macroeconomics_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_high_school_mathematics_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_high_school_microeconomics_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-23T20-41-54.627172.parquet' - config_name: 
harness_hendrycksTest_high_school_physics_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_high_school_psychology_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_high_school_statistics_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_high_school_us_history_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_high_school_world_history_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_human_aging_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-human_aging|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_aging|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_human_sexuality_5 
data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_international_law_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-international_law|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-international_law|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_jurisprudence_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_logical_fallacies_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_machine_learning_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-machine_learning|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-machine_learning|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_management_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-management|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-management|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_marketing_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-marketing|5_2023-12-23T20-41-54.627172.parquet' - 
split: latest path: - '**/details_harness|hendrycksTest-marketing|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_medical_genetics_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_miscellaneous_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_moral_disputes_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_moral_scenarios_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_nutrition_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-nutrition|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-nutrition|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_philosophy_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-philosophy|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-philosophy|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_prehistory_5 data_files: - 
split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-prehistory|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-prehistory|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_professional_accounting_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_professional_law_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-professional_law|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_law|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_professional_medicine_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_professional_psychology_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_public_relations_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-public_relations|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-public_relations|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_security_studies_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - 
'**/details_harness|hendrycksTest-security_studies|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-security_studies|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_sociology_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-sociology|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-sociology|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_us_foreign_policy_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_virology_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-virology|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-virology|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_hendrycksTest_world_religions_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|hendrycksTest-world_religions|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|hendrycksTest-world_religions|5_2023-12-23T20-41-54.627172.parquet' - config_name: harness_truthfulqa_mc_0 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|truthfulqa:mc|0_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|truthfulqa:mc|0_2023-12-23T20-41-54.627172.parquet' - config_name: harness_winogrande_5 data_files: - split: 2023_12_23T20_41_54.627172 path: - '**/details_harness|winogrande|5_2023-12-23T20-41-54.627172.parquet' - split: latest path: - '**/details_harness|winogrande|5_2023-12-23T20-41-54.627172.parquet' - config_name: results data_files: - split: 
2023_12_23T20_41_54.627172 path: - results_2023-12-23T20-41-54.627172.parquet - split: latest path: - results_2023-12-23T20-41-54.627172.parquet --- # Dataset Card for Evaluation run of KaeriJenti/kaori-34b-v4 <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [KaeriJenti/kaori-34b-v4](https://huggingface.co/KaeriJenti/kaori-34b-v4) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_KaeriJenti__kaori-34b-v4", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-12-23T20:41:54.627172](https://huggingface.co/datasets/open-llm-leaderboard/details_KaeriJenti__kaori-34b-v4/blob/main/results_2023-12-23T20-41-54.627172.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks.
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.2562435688049368, "acc_stderr": 0.03087677995486888, "acc_norm": 0.25622099120034325, "acc_norm_stderr": 0.03166775316506421, "mc1": 0.2864137086903305, "mc1_stderr": 0.015826142439502346, "mc2": 0.49462441219025927, "mc2_stderr": 0.016011015086112988 }, "harness|arc:challenge|25": { "acc": 0.189419795221843, "acc_stderr": 0.011450705115910769, "acc_norm": 0.23890784982935154, "acc_norm_stderr": 0.012461071376316614 }, "harness|hellaswag|10": { "acc": 0.27394941246763593, "acc_stderr": 0.004450718673552667, "acc_norm": 0.2896833300139414, "acc_norm_stderr": 0.004526883021027624 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.31, "acc_stderr": 0.04648231987117316, "acc_norm": 0.31, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.2740740740740741, "acc_stderr": 0.03853254836552003, "acc_norm": 0.2740740740740741, "acc_norm_stderr": 0.03853254836552003 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.2236842105263158, "acc_stderr": 0.033911609343436025, "acc_norm": 0.2236842105263158, "acc_norm_stderr": 0.033911609343436025 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.23, "acc_stderr": 0.04229525846816506, "acc_norm": 0.23, "acc_norm_stderr": 0.04229525846816506 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.2188679245283019, "acc_stderr": 0.025447863825108594, "acc_norm": 0.2188679245283019, "acc_norm_stderr": 0.025447863825108594 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.2222222222222222, "acc_stderr": 0.03476590104304136, "acc_norm": 0.2222222222222222, "acc_norm_stderr": 0.03476590104304136 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.22, "acc_stderr": 0.0416333199893227, "acc_norm": 0.22, "acc_norm_stderr": 0.0416333199893227 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.3, "acc_stderr": 0.046056618647183814, "acc_norm": 0.3, 
"acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.24, "acc_stderr": 0.04292346959909282, "acc_norm": 0.24, "acc_norm_stderr": 0.04292346959909282 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.21965317919075145, "acc_stderr": 0.031568093627031744, "acc_norm": 0.21965317919075145, "acc_norm_stderr": 0.031568093627031744 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.20588235294117646, "acc_stderr": 0.04023382273617747, "acc_norm": 0.20588235294117646, "acc_norm_stderr": 0.04023382273617747 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.21, "acc_stderr": 0.040936018074033256, "acc_norm": 0.21, "acc_norm_stderr": 0.040936018074033256 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.30638297872340425, "acc_stderr": 0.030135906478517563, "acc_norm": 0.30638297872340425, "acc_norm_stderr": 0.030135906478517563 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.20175438596491227, "acc_stderr": 0.037752050135836386, "acc_norm": 0.20175438596491227, "acc_norm_stderr": 0.037752050135836386 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.25517241379310346, "acc_stderr": 0.03632984052707842, "acc_norm": 0.25517241379310346, "acc_norm_stderr": 0.03632984052707842 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.2671957671957672, "acc_stderr": 0.02278967314577657, "acc_norm": 0.2671957671957672, "acc_norm_stderr": 0.02278967314577657 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.2857142857142857, "acc_stderr": 0.0404061017820884, "acc_norm": 0.2857142857142857, "acc_norm_stderr": 0.0404061017820884 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.18, "acc_stderr": 0.038612291966536934, "acc_norm": 0.18, "acc_norm_stderr": 0.038612291966536934 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.3161290322580645, "acc_stderr": 0.02645087448904277, "acc_norm": 0.3161290322580645, "acc_norm_stderr": 0.02645087448904277 }, 
"harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.19704433497536947, "acc_stderr": 0.027986724666736205, "acc_norm": 0.19704433497536947, "acc_norm_stderr": 0.027986724666736205 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.24, "acc_stderr": 0.042923469599092816, "acc_norm": 0.24, "acc_norm_stderr": 0.042923469599092816 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.23030303030303031, "acc_stderr": 0.032876667586034886, "acc_norm": 0.23030303030303031, "acc_norm_stderr": 0.032876667586034886 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.35353535353535354, "acc_stderr": 0.03406086723547153, "acc_norm": 0.35353535353535354, "acc_norm_stderr": 0.03406086723547153 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.37823834196891193, "acc_stderr": 0.03499807276193339, "acc_norm": 0.37823834196891193, "acc_norm_stderr": 0.03499807276193339 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.2128205128205128, "acc_stderr": 0.020752423722128016, "acc_norm": 0.2128205128205128, "acc_norm_stderr": 0.020752423722128016 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.3037037037037037, "acc_stderr": 0.02803792996911499, "acc_norm": 0.3037037037037037, "acc_norm_stderr": 0.02803792996911499 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.21428571428571427, "acc_stderr": 0.026653531596715477, "acc_norm": 0.21428571428571427, "acc_norm_stderr": 0.026653531596715477 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.2847682119205298, "acc_stderr": 0.03684881521389023, "acc_norm": 0.2847682119205298, "acc_norm_stderr": 0.03684881521389023 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.3155963302752294, "acc_stderr": 0.019926117513869666, "acc_norm": 0.3155963302752294, "acc_norm_stderr": 0.019926117513869666 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.24074074074074073, "acc_stderr": 
0.0291575221846056, "acc_norm": 0.24074074074074073, "acc_norm_stderr": 0.0291575221846056 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.30392156862745096, "acc_stderr": 0.03228210387037892, "acc_norm": 0.30392156862745096, "acc_norm_stderr": 0.03228210387037892 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.22362869198312235, "acc_stderr": 0.027123298205229972, "acc_norm": 0.22362869198312235, "acc_norm_stderr": 0.027123298205229972 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.24663677130044842, "acc_stderr": 0.028930413120910894, "acc_norm": 0.24663677130044842, "acc_norm_stderr": 0.028930413120910894 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.2366412213740458, "acc_stderr": 0.03727673575596918, "acc_norm": 0.2366412213740458, "acc_norm_stderr": 0.03727673575596918 }, "harness|hendrycksTest-international_law|5": { "acc": 0.3140495867768595, "acc_stderr": 0.042369647530410184, "acc_norm": 0.3140495867768595, "acc_norm_stderr": 0.042369647530410184 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.2222222222222222, "acc_stderr": 0.040191074725573483, "acc_norm": 0.2222222222222222, "acc_norm_stderr": 0.040191074725573483 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.2883435582822086, "acc_stderr": 0.03559039531617342, "acc_norm": 0.2883435582822086, "acc_norm_stderr": 0.03559039531617342 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.23214285714285715, "acc_stderr": 0.04007341809755805, "acc_norm": 0.23214285714285715, "acc_norm_stderr": 0.04007341809755805 }, "harness|hendrycksTest-management|5": { "acc": 0.27184466019417475, "acc_stderr": 0.044052680241409216, "acc_norm": 0.27184466019417475, "acc_norm_stderr": 0.044052680241409216 }, "harness|hendrycksTest-marketing|5": { "acc": 0.19230769230769232, "acc_stderr": 0.025819233256483706, "acc_norm": 0.19230769230769232, "acc_norm_stderr": 0.025819233256483706 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.3, 
"acc_stderr": 0.046056618647183814, "acc_norm": 0.3, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.27330779054916987, "acc_stderr": 0.015936681062628556, "acc_norm": 0.27330779054916987, "acc_norm_stderr": 0.015936681062628556 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.2543352601156069, "acc_stderr": 0.02344582627654554, "acc_norm": 0.2543352601156069, "acc_norm_stderr": 0.02344582627654554 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.2636871508379888, "acc_stderr": 0.014736926383761973, "acc_norm": 0.2636871508379888, "acc_norm_stderr": 0.014736926383761973 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.2875816993464052, "acc_stderr": 0.02591780611714716, "acc_norm": 0.2875816993464052, "acc_norm_stderr": 0.02591780611714716 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.2572347266881029, "acc_stderr": 0.024826171289250888, "acc_norm": 0.2572347266881029, "acc_norm_stderr": 0.024826171289250888 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.25617283950617287, "acc_stderr": 0.024288533637726095, "acc_norm": 0.25617283950617287, "acc_norm_stderr": 0.024288533637726095 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.24822695035460993, "acc_stderr": 0.025770015644290396, "acc_norm": 0.24822695035460993, "acc_norm_stderr": 0.025770015644290396 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.23272490221642764, "acc_stderr": 0.010792595553888496, "acc_norm": 0.23272490221642764, "acc_norm_stderr": 0.010792595553888496 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.22426470588235295, "acc_stderr": 0.02533684856333236, "acc_norm": 0.22426470588235295, "acc_norm_stderr": 0.02533684856333236 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.2679738562091503, "acc_stderr": 0.017917974069594722, "acc_norm": 0.2679738562091503, "acc_norm_stderr": 0.017917974069594722 }, "harness|hendrycksTest-public_relations|5": { "acc": 
0.2545454545454545, "acc_stderr": 0.04172343038705383, "acc_norm": 0.2545454545454545, "acc_norm_stderr": 0.04172343038705383 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.24897959183673468, "acc_stderr": 0.02768297952296023, "acc_norm": 0.24897959183673468, "acc_norm_stderr": 0.02768297952296023 }, "harness|hendrycksTest-sociology|5": { "acc": 0.1890547263681592, "acc_stderr": 0.027686913588013024, "acc_norm": 0.1890547263681592, "acc_norm_stderr": 0.027686913588013024 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.26, "acc_stderr": 0.04408440022768079, "acc_norm": 0.26, "acc_norm_stderr": 0.04408440022768079 }, "harness|hendrycksTest-virology|5": { "acc": 0.26506024096385544, "acc_stderr": 0.03436024037944966, "acc_norm": 0.26506024096385544, "acc_norm_stderr": 0.03436024037944966 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.3157894736842105, "acc_stderr": 0.035650796707083106, "acc_norm": 0.3157894736842105, "acc_norm_stderr": 0.035650796707083106 }, "harness|truthfulqa:mc|0": { "mc1": 0.2864137086903305, "mc1_stderr": 0.015826142439502346, "mc2": 0.49462441219025927, "mc2_stderr": 0.016011015086112988 }, "harness|winogrande|5": { "acc": 0.5722178374112076, "acc_stderr": 0.013905134013839957 }, "harness|gsm8k|5": { "acc": 0.006823351023502654, "acc_stderr": 0.0022675371022544905 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. 
--> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
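The aggregated results shown earlier are plain nested JSON keyed by `harness|<task>|<n_shot>`. As a minimal illustrative sketch (the excerpt and the `mmlu_average` helper below are ours for demonstration, not part of the dataset), the per-task accuracies can be aggregated like this:

```python
# Sketch: averaging per-task accuracies from a leaderboard results payload.
# `results` is a small excerpt of the JSON shown above; in practice you would
# load the full payload from the "results" configuration of this dataset.

results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.31},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.2740740740740741},
    "harness|winogrande|5": {"acc": 0.5722178374112076},
}

def mmlu_average(payload: dict) -> float:
    """Mean accuracy over the hendrycksTest (MMLU) subtasks present."""
    accs = [v["acc"] for k, v in payload.items() if "hendrycksTest" in k]
    return sum(accs) / len(accs)

print(round(mmlu_average(results), 4))  # prints 0.292 for this excerpt
```

The same pattern applies to `acc_norm` or the stderr fields; only the key filter changes.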
linhqyy/Zalo_Corpus_3
--- dataset_info: features: - name: audio dtype: audio - name: text dtype: string splits: - name: train num_bytes: 10832798885.816 num_examples: 98012 - name: test num_bytes: 2682493085.088 num_examples: 24503 download_size: 13425278612 dataset_size: 13515291970.904 --- # Dataset Card for "Zalo_Corpus_3" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
FaalSa/dataF
--- dataset_info: features: - name: start dtype: timestamp[s] - name: target sequence: float32 - name: item_id dtype: string - name: feat_static_cat sequence: uint64 splits: - name: train num_bytes: 57629 num_examples: 1 - name: validation num_bytes: 58109 num_examples: 1 - name: test num_bytes: 58589 num_examples: 1 download_size: 12645 dataset_size: 174327 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* ---
raygx/NepCov19TweetsPlus2
--- dataset_info: features: - name: Sentiment dtype: int64 - name: Sentences dtype: string splits: - name: train num_bytes: 15287354 num_examples: 39432 download_size: 0 dataset_size: 15287354 --- # Dataset Card for "NepCov19TweetsPlus2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
sheik21/Mc-Kevin
--- license: openrail ---
HydraLM/partitioned_v2_standardized_015
--- dataset_info: features: - name: message dtype: string - name: message_type dtype: string - name: message_id dtype: int64 - name: conversation_id dtype: int64 - name: dataset_id dtype: string - name: unique_conversation_id dtype: string splits: - name: train num_bytes: 109168438.408738 num_examples: 213571 download_size: 55155746 dataset_size: 109168438.408738 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "partitioned_v2_standardized_015" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
joey234/mmlu-world_religions-neg-prepend
--- dataset_info: features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D - name: negate_openai_prompt struct: - name: content dtype: string - name: role dtype: string - name: neg_question dtype: string - name: fewshot_context dtype: string - name: ori_prompt dtype: string - name: neg_prompt dtype: string - name: fewshot_context_neg dtype: string - name: fewshot_context_ori dtype: string splits: - name: dev num_bytes: 4859 num_examples: 5 - name: test num_bytes: 731696 num_examples: 171 download_size: 114166 dataset_size: 736555 configs: - config_name: default data_files: - split: dev path: data/dev-* - split: test path: data/test-* --- # Dataset Card for "mmlu-world_religions-neg-prepend" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jlbaker361/flickr_humans_dim_128_40k
--- dataset_info: features: - name: image dtype: image - name: split dtype: string - name: src dtype: string - name: style dtype: string splits: - name: train num_bytes: 1173267779.0 num_examples: 40000 download_size: 1171711830 dataset_size: 1173267779.0 --- # Dataset Card for "flickr_humans_dim_128_40k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
CShorten/ML-ArXiv-Papers
--- license: afl-3.0 --- This dataset contains the subset of ArXiv papers with the "cs.LG" tag to indicate the paper is about Machine Learning. The core dataset is filtered from the full ArXiv dataset hosted on Kaggle: https://www.kaggle.com/datasets/Cornell-University/arxiv. The original dataset contains roughly 2 million papers. This dataset contains roughly 100,000 papers following the category filtering. The dataset is maintained with requests to the ArXiv API. The current iteration of the dataset only contains the title and abstract of the paper. The ArXiv dataset contains additional features that we may look to include in future releases. We have highlighted the top two features on the roadmap for integration: <ul> <li> <b>authors</b> </li> <li> <b>update_date</b> </li> <li> Submitter </li> <li> Comments </li> <li> Journal-ref </li> <li> doi </li> <li> report-no </li> <li> categories </li> <li> license </li> <li> versions </li> <li> authors_parsed </li> </ul>
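The category filtering described above can be sketched as follows. This is a minimal illustration, assuming each ArXiv record carries a space-separated `categories` string as in the Kaggle metadata dump; the sample records are made up for the example, not drawn from the dataset.

```python
# Minimal sketch of the "cs.LG" category filter described above.
# Assumes each record carries a space-separated `categories` string,
# as in the Kaggle ArXiv metadata dump; the sample records are illustrative.

def is_ml_paper(record: dict) -> bool:
    """Return True if the paper is tagged with the cs.LG category."""
    return "cs.LG" in record.get("categories", "").split()

papers = [
    {"title": "Paper A", "abstract": "...", "categories": "cs.LG stat.ML"},
    {"title": "Paper B", "abstract": "...", "categories": "math.AP"},
]

# Keep only title and abstract, mirroring the current iteration of the dataset.
ml_papers = [
    {"title": p["title"], "abstract": p["abstract"]}
    for p in papers
    if is_ml_paper(p)
]
print([p["title"] for p in ml_papers])
```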
shidowake/oasst1-chat-ja-subset-from-kunishou
--- dataset_info: features: - name: conversations list: - name: from dtype: string - name: value dtype: string splits: - name: train num_bytes: 17699396 num_examples: 9659 download_size: 9016967 dataset_size: 17699396 configs: - config_name: default data_files: - split: train path: data/train-* --- # Description Subset of the original dataset extracted for my own purpose. # Original Dataset - [kunishou/oasst1-chat-44k-ja · Datasets at Hugging Face](https://huggingface.co/datasets/kunishou/oasst1-chat-44k-ja) # License - [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
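A minimal sketch of reshaping one record of this subset into role/content chat messages, based on the `conversations` schema (a list of `from`/`value` pairs) listed above. The `"human"`/`"gpt"` role names and the sample record are assumptions for illustration, not guaranteed by the dataset.

```python
# Reshape one record's "conversations" list ({"from", "value"} dicts, per the
# features above) into role/content messages. The "human"/"gpt" role names
# and the sample record are assumptions for illustration.

ROLE_MAP = {"human": "user", "gpt": "assistant"}

def to_messages(record: dict) -> list[dict]:
    """Convert a record's conversation turns into chat-style messages."""
    return [
        {"role": ROLE_MAP.get(turn["from"], turn["from"]), "content": turn["value"]}
        for turn in record["conversations"]
    ]

sample = {
    "conversations": [
        {"from": "human", "value": "こんにちは"},
        {"from": "gpt", "value": "こんにちは!ご用件は?"},
    ]
}
print(to_messages(sample))
```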
dhuynh95/Magicoder-Evol-Instruct-250-CodeLlama-70b-tokenized-0.5-Special-Token
--- dataset_info: features: - name: instruction dtype: string - name: output dtype: string splits: - name: train num_bytes: 563956 num_examples: 250 download_size: 300984 dataset_size: 563956 configs: - config_name: default data_files: - split: train path: data/train-* ---
CyberHarem/makoto_bluearchive
--- license: mit task_categories: - text-to-image tags: - art - not-for-all-audiences size_categories: - n<1K --- # Dataset of makoto/羽沼マコト/真琴 (Blue Archive) This is the dataset of makoto/羽沼マコト/真琴 (Blue Archive), containing 164 images and their tags. The core tags of this character are `horns, long_hair, grey_hair, demon_horns, black_horns, halo, multiple_horns, pointy_ears, breasts, hair_over_one_eye, grey_eyes, large_breasts, blue_eyes, hat, earrings, peaked_cap, very_long_hair, red_halo`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 164 | 271.32 MiB | [Download](https://huggingface.co/datasets/CyberHarem/makoto_bluearchive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 1200 | 164 | 227.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/makoto_bluearchive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. | | stage3-p480-1200 | 415 | 456.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/makoto_bluearchive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. 
If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/makoto_bluearchive', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. ### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 42 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, elbow_gloves, official_alternate_costume, cleavage, solo, jewelry, bare_shoulders, black_dress, looking_at_viewer, black_gloves, simple_background, white_background, smile, multicolored_hair, ponytail, closed_mouth, criss-cross_halter | | 1 | 11 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, black_coat, looking_at_viewer, solo, black_headwear, black_necktie, coat_on_shoulders, 
collared_shirt, long_sleeves, military_uniform, simple_background, white_background, white_gloves, black_belt, white_shirt, black_pantyhose, grin, hand_on_own_hip, black_jacket, black_skirt, cowboy_shot | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | elbow_gloves | official_alternate_costume | cleavage | solo | jewelry | bare_shoulders | black_dress | looking_at_viewer | black_gloves | simple_background | white_background | smile | multicolored_hair | ponytail | closed_mouth | criss-cross_halter | black_coat | black_headwear | black_necktie | coat_on_shoulders | collared_shirt | long_sleeves | military_uniform | white_gloves | black_belt | white_shirt | black_pantyhose | grin | hand_on_own_hip | black_jacket | black_skirt | cowboy_shot | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:-----------------------------|:-----------|:-------|:----------|:-----------------|:--------------|:--------------------|:---------------|:--------------------|:-------------------|:--------|:--------------------|:-----------|:---------------|:---------------------|:-------------|:-----------------|:----------------|:--------------------|:-----------------|:---------------|:-------------------|:---------------|:-------------|:--------------|:------------------|:-------|:------------------|:---------------|:--------------|:--------------| | 0 | 42 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | 1 | 11 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | 
![](samples/1/clu1-sample4.png) | X | | | | X | | | | X | | X | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
Bailey24/tangram-data
--- dataset_info: features: - name: image dtype: image - name: text dtype: string splits: - name: train num_bytes: 3638532.0 num_examples: 316 download_size: 3354349 dataset_size: 3638532.0 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "tangram-data" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mswhite/artwork
--- license: artistic-2.0 ---
kpriyanshu256/MultiTabQA-multitable_pretraining-train-v2-60000
--- dataset_info: features: - name: tables sequence: string - name: table_names sequence: string - name: query dtype: string - name: answer dtype: string - name: source dtype: string - name: target dtype: string - name: source_latex dtype: string - name: target_latex dtype: string - name: source_html dtype: string - name: target_html dtype: string - name: source_markdown dtype: string - name: target_markdown dtype: string splits: - name: train num_bytes: 15922887723 num_examples: 2500 download_size: 3033180169 dataset_size: 15922887723 configs: - config_name: default data_files: - split: train path: data/train-* ---
m-a-p/MAP-CC
--- non-commercial licence: CC BY-NC-ND 4.0 license: cc-by-nc-nd-4.0 --- # MAP-CC [**🌐 Homepage**](https://chinese-tiny-llm.github.io) | [**🤗 MAP-CC**](https://huggingface.co/datasets/m-a-p/MAP-CC) | [**🤗 CHC-Bench**](https://huggingface.co/datasets/m-a-p/CHC-Bench) | [**🤗 CT-LLM**](https://huggingface.co/collections/m-a-p/chinese-tiny-llm-660d0133dff6856f94ce0fc6) | [**📖 arXiv**](https://arxiv.org/abs/2404.04167) | [**GitHub**](https://github.com/Chinese-Tiny-LLM/Chinese-Tiny-LLM) An open-source Chinese pretraining dataset with a scale of 800 billion tokens, offering the NLP community high-quality Chinese pretraining data. ## Disclaimer This model, developed for academic purposes, employs rigorously compliance-checked training data to uphold the highest standards of integrity and compliance. Despite our efforts, the inherent complexities of data and the broad spectrum of model applications prevent us from ensuring absolute accuracy or appropriateness of the model outputs in every scenario. It is essential to highlight that our model and its associated training data are intended solely for scholarly research. We explicitly disclaim any liability for problems that may arise from improper use, interpretation errors, unlawful activities, the dissemination of false information, or any data security issues related to the utilization of our model or its training data. We strongly encourage users to report any concerns related to data misuse, security breaches, or potential infringement issues directly to us for immediate investigation and resolution. ### Contact: {`ge.zhang@uwaterloo.ca; duxinrun2000@gmail.com`} Our commitment to responsible data sharing and the security of our academic tools is paramount. We thank you for your cooperation in maintaining the ethical use of this technology. ## License The MAP-CC Dataset is made available under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License ([CC BY-NC-ND 4.0](LICENSE)). 
By using the MAP-CC Dataset, you accept and agree to be bound by the terms and conditions of the CC BY-NC-ND 4.0 License. This license allows users to share (copy and redistribute the material in any medium or format) the MAP-CC Dataset for non-commercial purposes only, and with no modifications or derivatives, as long as proper attribution is given to the creators. For further details, please refer to the [LICENSE](LICENSE) file. We chose the CC BY-NC-ND 4.0 License for the MAP-CC Dataset to facilitate academic and educational use, promoting the spread of knowledge while protecting the work of the creators from unauthorized commercial use or modification. ## Usage Instructions After downloading the parts of the dataset, you can concatenate them into a single file for each split of the dataset using the following command in a UNIX-like terminal: ```bash cat [split].gz.part* > [split].gz ``` Replace [split] with the name of the dataset component you wish to merge (zh-cc, zh-baike, zh-papers, zh-books, or zh-others). After merging, decompress the .gz file to access the dataset's content. ## Dataset Composition The dataset consists of several components, each originating from different sources and serving various purposes in language modeling and processing. Below is a brief overview of each component: <p> <img src="data-ratio.png" style="float: right; width: 400px; margin-left: 10px;"> <strong>zh-cc (Chinese Common Crawl)</strong><br> Extracts from the Common Crawl project specifically filtered for Chinese content. This component is rich in diverse internet text, ranging from websites, blogs, news articles, and more.<br><br> <strong>zh-baike (Chinese Encyclopedias)</strong><br> A collection of articles from various Chinese encyclopedias, similar to Wikipedia but including other encyclopedic sources as well.<br><br> <strong>zh-papers (Chinese Academic Papers)</strong><br> This component consists of academic and research papers published in Chinese. 
It covers a wide range of disciplines and offers technical, domain-specific language.<br><br> <strong>zh-books (Chinese Books)</strong><br> Comprises texts extracted from books published in Chinese. This includes literature, non-fiction, textbooks, and more.<br><br> <strong>zh-others</strong><br> This category is a collection of miscellaneous texts, notably including a substantial amount of QA (Question and Answer) data, alongside a variety of other texts.<br> </p> ## Citation ``` @misc{du2024chinese, title={Chinese Tiny LLM: Pretraining a Chinese-Centric Large Language Model}, author={Xinrun Du and Zhouliang Yu and Songyang Gao and Ding Pan and Yuyang Cheng and Ziyang Ma and Ruibin Yuan and Xingwei Qu and Jiaheng Liu and Tianyu Zheng and Xinchen Luo and Guorui Zhou and Binhang Yuan and Wenhu Chen and Jie Fu and Ge Zhang}, year={2024}, eprint={2404.04167}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
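The merge-and-decompress step from the Usage Instructions above can be sketched end to end as follows. A tiny fake split is built first so the sketch is runnable on its own; with the real download, the `[split].gz.part*` files come from this repository instead.

```bash
# Sketch of the merge-and-decompress step from the Usage Instructions above.
# A tiny fake split is built first so the sketch runs end to end; the real
# part files ([split].gz.part*) come from the dataset download.
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Simulate a downloaded split that arrived in several parts.
printf 'hello map-cc\n' > zh-books
gzip zh-books                            # -> zh-books.gz
split -b 8 zh-books.gz zh-books.gz.part
rm zh-books.gz

# The actual steps from the card: merge the parts, then decompress.
cat zh-books.gz.part* > zh-books.gz
gunzip zh-books.gz                       # -> zh-books

cat zh-books                             # prints: hello map-cc
```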
vtiyyal1/AskDocsEmpathy_4k_dpo
--- dataset_info: features: - name: prompt dtype: string - name: chosen dtype: string - name: rejected dtype: string splits: - name: train num_bytes: 11546422.0 num_examples: 4124 download_size: 4598186 dataset_size: 11546422.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
Nexdata/39993_Images_OCR_Data_of_Internet_Image
--- license: cc-by-nc-nd-4.0 --- ## Description 39,993 Images – OCR Data of Internet Image. The collecting scenes of this dataset include subtitle, advertisement, cellphone screenshot, comic, emoticon, poster, magazine cover, etc. The language distribution is Chinese, English (a few). For annotation, line-level rectangular bounding box annotation and transcription for the texts were adopted for the internet images (column-level quadrilateral bounding box annotation and transcription for the texts were adopted for a small amount of data). The dataset can be used for OCR tasks of internet images. For more details, please refer to the link: https://www.nexdata.ai/dataset/171?source=Huggingface ## Data size 39,993 images, 227,910 bounding boxes ## Collecting environment including subtitle, advertisement, cellphone screenshot, comic, emoticon, poster, magazine cover etc. ## Data diversity including multiple types of internet images ## Language distribution Chinese, English (a few) ## Data format the image data format is .jpg, the annotation file format is .json ## Annotation content line-level rectangular bounding box annotation and transcription for the texts (column-level quadrilateral bounding box annotation and transcription for the texts were adopted for a small amount of data) ## Accuracy a rectangular bounding box annotation qualifies if the error of each vertex is within 5 pixels; the accuracy of bounding boxes is not less than 97%, and the text transcription accuracy is not less than 97% # Licensing Information Commercial License
open-llm-leaderboard/details_monology__openinstruct-mistral-7b
--- pretty_name: Evaluation run of monology/openinstruct-mistral-7b dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [monology/openinstruct-mistral-7b](https://huggingface.co/monology/openinstruct-mistral-7b)\ \ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 1 configuration, each one coresponding to one of the\ \ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" store all the aggregated results of the\ \ run (and is used to compute and display the aggregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_monology__openinstruct-mistral-7b\"\ ,\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese\ \ are the [latest results from run 2023-12-02T14:37:45.308018](https://huggingface.co/datasets/open-llm-leaderboard/details_monology__openinstruct-mistral-7b/blob/main/results_2023-12-02T14-37-45.308018.json)(note\ \ that their might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5049279757391963,\n\ \ \"acc_stderr\": 0.013771815775470575\n },\n \"harness|gsm8k|5\":\ \ {\n \"acc\": 0.5049279757391963,\n \"acc_stderr\": 0.013771815775470575\n\ \ }\n}\n```" repo_url: https://huggingface.co/monology/openinstruct-mistral-7b leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_gsm8k_5 data_files: - split: 2023_12_02T14_37_45.308018 path: - '**/details_harness|gsm8k|5_2023-12-02T14-37-45.308018.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2023-12-02T14-37-45.308018.parquet' - config_name: results data_files: - split: 2023_12_02T14_37_45.308018 path: - results_2023-12-02T14-37-45.308018.parquet - split: latest path: - results_2023-12-02T14-37-45.308018.parquet --- # Dataset Card for Evaluation run of monology/openinstruct-mistral-7b ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/monology/openinstruct-mistral-7b - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** clementine@hf.co ### Dataset Summary Dataset automatically created during the evaluation run of model [monology/openinstruct-mistral-7b](https://huggingface.co/monology/openinstruct-mistral-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. 
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_monology__openinstruct-mistral-7b", "harness_gsm8k_5", split="train") ``` ## Latest results These are the [latest results from run 2023-12-02T14:37:45.308018](https://huggingface.co/datasets/open-llm-leaderboard/details_monology__openinstruct-mistral-7b/blob/main/results_2023-12-02T14-37-45.308018.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.5049279757391963, "acc_stderr": 0.013771815775470575 }, "harness|gsm8k|5": { "acc": 0.5049279757391963, "acc_stderr": 0.013771815775470575 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
bigcode/guanaco-xp3x-commits-instruct
--- dataset_info: features: - name: prompt dtype: string - name: completion dtype: string splits: - name: train num_bytes: 28401367.0 num_examples: 22561 - name: test num_bytes: 1307476.0 num_examples: 1029 download_size: 17258786 dataset_size: 29708843.0 --- # Dataset Card for "all-for-one" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
CyberHarem/js_9_girlsfrontline
--- license: mit task_categories: - text-to-image tags: - art - not-for-all-audiences size_categories: - n<1K --- # Dataset of js_9/JS9/JS9 (Girls' Frontline) This is the dataset of js_9/JS9/JS9 (Girls' Frontline), containing 33 images and their tags. The core tags of this character are `brown_hair, long_hair, red_eyes, breasts, hairband, bangs, hair_ornament, mole_under_eye, mole, large_breasts`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:---------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------| | raw | 33 | 40.14 MiB | [Download](https://huggingface.co/datasets/CyberHarem/js_9_girlsfrontline/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 33 | 24.86 MiB | [Download](https://huggingface.co/datasets/CyberHarem/js_9_girlsfrontline/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 82 | 50.48 MiB | [Download](https://huggingface.co/datasets/CyberHarem/js_9_girlsfrontline/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 33 | 35.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/js_9_girlsfrontline/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. 
| | stage3-p480-1200 | 82 | 64.59 MiB | [Download](https://huggingface.co/datasets/CyberHarem/js_9_girlsfrontline/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/js_9_girlsfrontline', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering result, maybe some outfits can be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 11 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, looking_at_viewer, single_mechanical_arm, blue_jacket, scarf, blush, gloves, headset, simple_background, white_background, holding, open_mouth, single_thighhigh, uneven_legwear, upper_body | | 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, eyewear_on_head, mechanical_arms, solo, sunglasses, white_bikini, looking_at_viewer, navel, official_alternate_costume, black_shorts, cleavage, standing, watermelon_bar, belt, collarbone, twintails | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | looking_at_viewer | single_mechanical_arm | blue_jacket | scarf | blush | gloves | headset | simple_background | white_background | holding | open_mouth | single_thighhigh | uneven_legwear | upper_body | eyewear_on_head | mechanical_arms | sunglasses | white_bikini | navel | official_alternate_costume | black_shorts | cleavage | standing | watermelon_bar | belt | collarbone | twintails | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------------------|:------------------------|:--------------|:--------|:--------|:---------|:----------|:--------------------|:-------------------|:----------|:-------------|:-------------------|:-----------------|:-------------|:------------------|:------------------|:-------------|:---------------|:--------|:-----------------------------|:---------------|:-----------|:-----------|:-----------------|:-------|:-------------|:------------| | 0 | 11 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X |
parsee-ai/invoices-example
--- license: mit task_categories: - question-answering language: - en - de tags: - llm - rag - document processing size_categories: - n<1K --- # Invoices Sample Dataset This is a sample dataset generated on app.parsee.ai for invoices. The goal was to evaluate different LLMs on this RAG task using the Parsee evaluation tools. A full study can be found here: https://github.com/parsee-ai/parsee-datasets/blob/main/datasets/invoices/parsee-loader/README.md parsee-core version used: 0.1.3.11 This dataset was created on the basis of 15 sample invoices (PDF files). All PDF files are publicly accessible on parsee.ai; to access them, copy the "source_identifier" (first column) and paste it in this URL (replace '{SOURCE_IDENTIFIER}' with the actual identifier): https://app.parsee.ai/documents/view/{SOURCE_IDENTIFIER} So for example: https://app.parsee.ai/documents/view/1fd7fdbd88d78aa6e80737b8757290b78570679fbb926995db362f38a0d161ea
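For convenience, building the viewer URL from a row's "source_identifier" can be sketched as below; the example identifier is the one given above.

```python
# Build the parsee.ai document-viewer URL from a row's source_identifier,
# following the URL template given above.

VIEWER_URL_TEMPLATE = "https://app.parsee.ai/documents/view/{source_identifier}"

def viewer_url(source_identifier: str) -> str:
    """Return the public viewer URL for one of the sample invoice PDFs."""
    return VIEWER_URL_TEMPLATE.format(source_identifier=source_identifier)

# The example identifier from the card:
example = "1fd7fdbd88d78aa6e80737b8757290b78570679fbb926995db362f38a0d161ea"
print(viewer_url(example))
```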
bvand086/ata_guidelines
--- dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 803456 num_examples: 847 download_size: 435112 dataset_size: 803456 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "ata_guidelines" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
vietgpt-archive/CC-MAIN-2020-10
--- dataset_info: features: - name: raw_text dtype: string - name: meta struct: - name: subfile dtype: string - name: url dtype: string splits: - name: train num_bytes: 3526377388 num_examples: 670453 download_size: 1591962832 dataset_size: 3526377388 --- # Dataset Card for "CC-MAIN-2020-10" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
atharvapawar/Sorted_Diversevul_llama2_dataset
--- license: mit ---
CyberHarem/power_chainsawman
---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---

# Dataset of Power/パワー (Chainsaw Man)

This is the dataset of Power/パワー (Chainsaw Man), containing 466 images and their tags. The core tags of this character are `horns, long_hair, red_horns, blonde_hair, symbol-shaped_pupils, cross-shaped_pupils, hair_between_eyes, yellow_eyes, demon_horns`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 466 | 321.57 MiB | [Download](https://huggingface.co/datasets/CyberHarem/power_chainsawman/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 466 | 321.38 MiB | [Download](https://huggingface.co/datasets/CyberHarem/power_chainsawman/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 867 | 540.95 MiB | [Download](https://huggingface.co/datasets/CyberHarem/power_chainsawman/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |

### Load Raw Dataset with Waifuc

We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading.
If you need this, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/power_chainsawman',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering results; maybe some outfits can be mined here.

### Raw Text Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:------|:------|:------|:------|:------|:-----|
| 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, black_necktie, blue_jacket, closed_mouth, collared_shirt, hood, looking_at_viewer, solo, upper_body, white_shirt, off_shoulder, orange_eyes, sweatdrop |
| 1 | 9 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, black_necktie, blue_jacket, collared_shirt, solo, white_shirt, cloudy_sky, outdoors, hood, upper_body, off_shoulder, pants |
| 2 | 7 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | black_jacket, black_necktie, black_pants, collared_shirt, hand_on_own_hip, looking_at_viewer, solo_focus, white_shirt, 1girl, 1boy |
| 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, black_necktie, collared_shirt, open_mouth, sharp_teeth, solo, white_shirt |
| 4 | 9 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, black_necktie, collared_shirt, sharp_teeth, white_shirt, open_mouth, solo, suit, formal, black_jacket |
| 5 | 7 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, black_jacket, black_necktie, collared_shirt, looking_at_viewer, sharp_teeth, solo, white_shirt, smile, open_mouth, upper_body, blood, indoors, orange_eyes, weapon |
| 6 | 7 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, black_necktie, collared_shirt, sharp_teeth, white_shirt, solo, grin, jacket, looking_at_viewer, pink_hair |
| 7 | 11 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | 1girl, blue_jacket, open_mouth, solo, white_shirt, black_necktie, pink_hair, simple_background, collared_shirt, white_background, black_pants, sharp_teeth, looking_at_viewer, red_eyes, hair_over_one_eye |
| 8 | 11 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | 1girl, sharp_teeth, solo, portrait, open_mouth, looking_at_viewer, smile, blush, parody, scene_reference |
| 9 | 7 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | 1boy, 1girl, black_necktie, black_pants, collared_shirt, sharp_teeth, sweat, white_shirt, blue_jacket, clenched_teeth, on_back, looking_up, orange_eyes, crying, on_ground |
| 10 | 6 | ![](samples/10/clu10-sample0.png) | ![](samples/10/clu10-sample1.png) | ![](samples/10/clu10-sample2.png) | ![](samples/10/clu10-sample3.png) | ![](samples/10/clu10-sample4.png) | 1girl, hoodie, purple_jacket, sharp_teeth, open_mouth, white_shirt, 1boy, blush, open_clothes, smile, blurry, collarbone, looking_at_viewer, solo_focus |

### Table Version

| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_necktie | blue_jacket | closed_mouth | collared_shirt | hood | looking_at_viewer | solo | upper_body | white_shirt | off_shoulder | orange_eyes | sweatdrop | cloudy_sky | outdoors | pants | black_jacket | black_pants | hand_on_own_hip | solo_focus | 1boy | open_mouth | sharp_teeth | suit | formal | smile | blood | indoors | weapon | grin | jacket | pink_hair | simple_background | white_background | red_eyes | hair_over_one_eye | portrait | blush | parody | scene_reference | sweat | clenched_teeth | on_back | looking_up | crying | on_ground | hoodie | purple_jacket | open_clothes | blurry | collarbone |
|----:|----------:|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|:------|
| 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 9 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | | X | X | | X | X | X | X | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 7 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | | | X | | X | | | X | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | | | X | | | X | | X | | | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 9 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | X | | | X | | | X | | X | | | | | | | X | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 7 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | X | | | X | | X | X | X | X | | X | | | | | X | | | | | X | X | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 6 | 7 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | X | | | X | | X | X | | X | | | | | | | | | | | | | X | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | |
| 7 | 11 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | X | X | X | | X | | X | X | | X | | | | | | | | X | | | | X | X | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | |
| 8 | 11 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | X | | | | | | X | X | | | | | | | | | | | | | | X | X | | | X | | | | | | | | | | | X | X | X | X | | | | | | | | | | | |
| 9 | 7 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | X | X | X | | X | | | | | X | | X | | | | | | X | | | X | | X | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | |
| 10 | 6 | ![](samples/10/clu10-sample0.png) | ![](samples/10/clu10-sample1.png) | ![](samples/10/clu10-sample2.png) | ![](samples/10/clu10-sample3.png) | ![](samples/10/clu10-sample4.png) | X | | | | | | X | | | X | | | | | | | | | | X | X | X | X | | | X | | | | | | | | | | | | X | | | | | | | | | X | X | X | X | X |