Columns: datasetId (string, length 2–117), card (string, length 19–1.01M)
ibivibiv/plantuml-training
---
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 1569689
    num_examples: 972
  download_size: 681556
  dataset_size: 1569689
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
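Each record in this dump pairs a `datasetId` with the raw text of its card, whose YAML metadata sits between `---` delimiters. A minimal stdlib-only sketch for separating that frontmatter from the markdown body (the helper name `split_card` is ours, not part of any library):

```python
def split_card(card: str):
    """Split a dataset card into (yaml_frontmatter, body).

    Assumes the card starts with a '---'-delimited YAML block,
    as the records in this dump do; otherwise the whole card is
    treated as body.
    """
    if not card.startswith("---"):
        return "", card
    head, sep, rest = card[3:].partition("---")
    if not sep:  # no closing delimiter found
        return "", card
    return head.strip(), rest.strip()
```

The frontmatter string can then be handed to any YAML parser (e.g. `yaml.safe_load`) to recover the `dataset_info` and `configs` fields.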
faizalnf1800/gpt-generated-review-product
---
license: mit
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: text
    dtype: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 26004
    num_examples: 166
  - name: test
    num_bytes: 1475
    num_examples: 9
  download_size: 16196
  dataset_size: 27479
---
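In these cards, `dataset_size` is the sum of the per-split `num_bytes`, while `download_size` is the (smaller) on-disk size of the compressed Parquet files. A quick sanity check against the numbers above:

```python
# Per-split byte counts copied from the card above.
splits = {"train": 26004, "test": 1475}
dataset_size = 27479  # as declared in the card

# dataset_size should equal the sum of the split sizes.
assert sum(splits.values()) == dataset_size
```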
weiji14/clay_vector_embeddings
---
license: openrail
---
LeandreSassi/facades_DS
---
dataset_info:
  features:
  - name: image
    dtype: image
  splits:
  - name: train
    num_bytes: 52063902.628
    num_examples: 3238
  download_size: 51917865
  dataset_size: 52063902.628
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
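The metadata also supports quick back-of-envelope checks; for instance, dividing `num_bytes` by `num_examples` gives the average stored size per image (roughly 16 KB here):

```python
num_bytes = 52063902.628  # from the card above
num_examples = 3238

avg_bytes = num_bytes / num_examples
assert 16_000 < avg_bytes < 16_200  # ~16 KB per facade image
```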
open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-open_orca_20w
--- pretty_name: Evaluation run of CHIH-HUNG/llama-2-13b-open_orca_20w dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [CHIH-HUNG/llama-2-13b-open_orca_20w](https://huggingface.co/CHIH-HUNG/llama-2-13b-open_orca_20w)\ \ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 61 configurations, each one corresponding to one of the\ \ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run. The \"train\" split always points to the latest results.\n\ \nAn additional configuration \"results\" stores all the aggregated results of the\ \ run (and is used to compute and display the aggregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-open_orca_20w\"\ ,\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\ \nThese are the [latest results from run 2023-08-30T09:16:35.193244](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-open_orca_20w/blob/main/results_2023-08-30T09%3A16%3A35.193244.json) (note\ \ that there might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5638417257556474,\n\ \ \"acc_stderr\": 0.03429434698845683,\n \"acc_norm\": 0.5680270426638301,\n\ \ \"acc_norm_stderr\": 0.03427324810755194,\n \"mc1\": 0.3011015911872705,\n\ \ \"mc1_stderr\": 0.016058999026100612,\n \"mc2\": 0.4313573698064727,\n\ \ \"mc2_stderr\": 0.014673057614679777\n },\n \"harness|arc:challenge|25\"\ : {\n \"acc\": 0.5588737201365188,\n \"acc_stderr\": 0.014509747749064664,\n\ \ \"acc_norm\": 0.5989761092150171,\n \"acc_norm_stderr\": 0.014322255790719869\n\ \ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6183031268671579,\n\ \ \"acc_stderr\": 0.004848099661619698,\n \"acc_norm\": 0.8251344353714399,\n\ \ \"acc_norm_stderr\": 0.0037907576465759014\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\ : {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526066,\n \ \ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526066\n \ \ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4666666666666667,\n\ \ \"acc_stderr\": 0.043097329010363554,\n \"acc_norm\": 0.4666666666666667,\n\ \ \"acc_norm_stderr\": 0.043097329010363554\n },\n \"harness|hendrycksTest-astronomy|5\"\ : {\n \"acc\": 0.5723684210526315,\n \"acc_stderr\": 0.04026097083296564,\n\ \ \"acc_norm\": 0.5723684210526315,\n \"acc_norm_stderr\": 0.04026097083296564\n\ \ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.54,\n\ \ \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.54,\n \ \ \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\ : {\n \"acc\": 0.6377358490566037,\n \"acc_stderr\": 0.029582245128384303,\n\ \ \"acc_norm\": 0.6377358490566037,\n \"acc_norm_stderr\": 0.029582245128384303\n\ \ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5694444444444444,\n\ \ \"acc_stderr\": 0.04140685639111502,\n \"acc_norm\": 0.5694444444444444,\n\ \ \"acc_norm_stderr\": 
0.04140685639111502\n },\n \"harness|hendrycksTest-college_chemistry|5\"\ : {\n \"acc\": 0.43,\n \"acc_stderr\": 0.049756985195624284,\n \ \ \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.049756985195624284\n \ \ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\ acc\": 0.45,\n \"acc_stderr\": 0.049999999999999996,\n \"acc_norm\"\ : 0.45,\n \"acc_norm_stderr\": 0.049999999999999996\n },\n \"harness|hendrycksTest-college_mathematics|5\"\ : {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \ \ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n \ \ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5144508670520231,\n\ \ \"acc_stderr\": 0.03810871630454764,\n \"acc_norm\": 0.5144508670520231,\n\ \ \"acc_norm_stderr\": 0.03810871630454764\n },\n \"harness|hendrycksTest-college_physics|5\"\ : {\n \"acc\": 0.28431372549019607,\n \"acc_stderr\": 0.04488482852329017,\n\ \ \"acc_norm\": 0.28431372549019607,\n \"acc_norm_stderr\": 0.04488482852329017\n\ \ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\ \ 0.74,\n \"acc_stderr\": 0.04408440022768078,\n \"acc_norm\": 0.74,\n\ \ \"acc_norm_stderr\": 0.04408440022768078\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\ : {\n \"acc\": 0.4425531914893617,\n \"acc_stderr\": 0.03246956919789958,\n\ \ \"acc_norm\": 0.4425531914893617,\n \"acc_norm_stderr\": 0.03246956919789958\n\ \ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.21929824561403508,\n\ \ \"acc_stderr\": 0.03892431106518754,\n \"acc_norm\": 0.21929824561403508,\n\ \ \"acc_norm_stderr\": 0.03892431106518754\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\ : {\n \"acc\": 0.5310344827586206,\n \"acc_stderr\": 0.04158632762097828,\n\ \ \"acc_norm\": 0.5310344827586206,\n \"acc_norm_stderr\": 0.04158632762097828\n\ \ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\ : 0.3492063492063492,\n \"acc_stderr\": 0.024552292209342665,\n \"\ acc_norm\": 
0.3492063492063492,\n \"acc_norm_stderr\": 0.024552292209342665\n\ \ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4126984126984127,\n\ \ \"acc_stderr\": 0.04403438954768176,\n \"acc_norm\": 0.4126984126984127,\n\ \ \"acc_norm_stderr\": 0.04403438954768176\n },\n \"harness|hendrycksTest-global_facts|5\"\ : {\n \"acc\": 0.37,\n \"acc_stderr\": 0.048523658709391,\n \ \ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.048523658709391\n },\n\ \ \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6516129032258065,\n\ \ \"acc_stderr\": 0.027104826328100944,\n \"acc_norm\": 0.6516129032258065,\n\ \ \"acc_norm_stderr\": 0.027104826328100944\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\ : {\n \"acc\": 0.46798029556650245,\n \"acc_stderr\": 0.035107665979592154,\n\ \ \"acc_norm\": 0.46798029556650245,\n \"acc_norm_stderr\": 0.035107665979592154\n\ \ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \ \ \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\"\ : 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\ : {\n \"acc\": 0.6787878787878788,\n \"acc_stderr\": 0.036462049632538115,\n\ \ \"acc_norm\": 0.6787878787878788,\n \"acc_norm_stderr\": 0.036462049632538115\n\ \ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\ : 0.7121212121212122,\n \"acc_stderr\": 0.03225883512300992,\n \"\ acc_norm\": 0.7121212121212122,\n \"acc_norm_stderr\": 0.03225883512300992\n\ \ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\ \ \"acc\": 0.8341968911917098,\n \"acc_stderr\": 0.026839845022314415,\n\ \ \"acc_norm\": 0.8341968911917098,\n \"acc_norm_stderr\": 0.026839845022314415\n\ \ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \ \ \"acc\": 0.5025641025641026,\n \"acc_stderr\": 0.025350672979412195,\n\ \ \"acc_norm\": 0.5025641025641026,\n \"acc_norm_stderr\": 0.025350672979412195\n\ \ },\n 
\"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\ acc\": 0.2777777777777778,\n \"acc_stderr\": 0.027309140588230182,\n \ \ \"acc_norm\": 0.2777777777777778,\n \"acc_norm_stderr\": 0.027309140588230182\n\ \ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \ \ \"acc\": 0.5756302521008403,\n \"acc_stderr\": 0.032104790510157764,\n\ \ \"acc_norm\": 0.5756302521008403,\n \"acc_norm_stderr\": 0.032104790510157764\n\ \ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\ : 0.3509933774834437,\n \"acc_stderr\": 0.03896981964257375,\n \"\ acc_norm\": 0.3509933774834437,\n \"acc_norm_stderr\": 0.03896981964257375\n\ \ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\ : 0.7339449541284404,\n \"acc_stderr\": 0.018946022322225597,\n \"\ acc_norm\": 0.7339449541284404,\n \"acc_norm_stderr\": 0.018946022322225597\n\ \ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\ : 0.46296296296296297,\n \"acc_stderr\": 0.03400603625538271,\n \"\ acc_norm\": 0.46296296296296297,\n \"acc_norm_stderr\": 0.03400603625538271\n\ \ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\ : 0.7598039215686274,\n \"acc_stderr\": 0.02998373305591361,\n \"\ acc_norm\": 0.7598039215686274,\n \"acc_norm_stderr\": 0.02998373305591361\n\ \ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\ acc\": 0.7552742616033755,\n \"acc_stderr\": 0.02798569938703643,\n \ \ \"acc_norm\": 0.7552742616033755,\n \"acc_norm_stderr\": 0.02798569938703643\n\ \ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6457399103139013,\n\ \ \"acc_stderr\": 0.032100621541349864,\n \"acc_norm\": 0.6457399103139013,\n\ \ \"acc_norm_stderr\": 0.032100621541349864\n },\n \"harness|hendrycksTest-human_sexuality|5\"\ : {\n \"acc\": 0.6412213740458015,\n \"acc_stderr\": 0.04206739313864908,\n\ \ \"acc_norm\": 0.6412213740458015,\n \"acc_norm_stderr\": 0.04206739313864908\n\ \ },\n 
\"harness|hendrycksTest-international_law|5\": {\n \"acc\":\ \ 0.7272727272727273,\n \"acc_stderr\": 0.04065578140908705,\n \"\ acc_norm\": 0.7272727272727273,\n \"acc_norm_stderr\": 0.04065578140908705\n\ \ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6944444444444444,\n\ \ \"acc_stderr\": 0.044531975073749834,\n \"acc_norm\": 0.6944444444444444,\n\ \ \"acc_norm_stderr\": 0.044531975073749834\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\ : {\n \"acc\": 0.6809815950920245,\n \"acc_stderr\": 0.03661997551073836,\n\ \ \"acc_norm\": 0.6809815950920245,\n \"acc_norm_stderr\": 0.03661997551073836\n\ \ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.2857142857142857,\n\ \ \"acc_stderr\": 0.04287858751340456,\n \"acc_norm\": 0.2857142857142857,\n\ \ \"acc_norm_stderr\": 0.04287858751340456\n },\n \"harness|hendrycksTest-management|5\"\ : {\n \"acc\": 0.7378640776699029,\n \"acc_stderr\": 0.04354631077260595,\n\ \ \"acc_norm\": 0.7378640776699029,\n \"acc_norm_stderr\": 0.04354631077260595\n\ \ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8076923076923077,\n\ \ \"acc_stderr\": 0.02581923325648372,\n \"acc_norm\": 0.8076923076923077,\n\ \ \"acc_norm_stderr\": 0.02581923325648372\n },\n \"harness|hendrycksTest-medical_genetics|5\"\ : {\n \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \ \ \"acc_norm\": 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n \ \ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7662835249042146,\n\ \ \"acc_stderr\": 0.015133383278988827,\n \"acc_norm\": 0.7662835249042146,\n\ \ \"acc_norm_stderr\": 0.015133383278988827\n },\n \"harness|hendrycksTest-moral_disputes|5\"\ : {\n \"acc\": 0.6647398843930635,\n \"acc_stderr\": 0.02541600377316555,\n\ \ \"acc_norm\": 0.6647398843930635,\n \"acc_norm_stderr\": 0.02541600377316555\n\ \ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4480446927374302,\n\ \ \"acc_stderr\": 0.016631976628930595,\n \"acc_norm\": 
0.4480446927374302,\n\ \ \"acc_norm_stderr\": 0.016631976628930595\n },\n \"harness|hendrycksTest-nutrition|5\"\ : {\n \"acc\": 0.6372549019607843,\n \"acc_stderr\": 0.02753007844711031,\n\ \ \"acc_norm\": 0.6372549019607843,\n \"acc_norm_stderr\": 0.02753007844711031\n\ \ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6495176848874598,\n\ \ \"acc_stderr\": 0.027098652621301754,\n \"acc_norm\": 0.6495176848874598,\n\ \ \"acc_norm_stderr\": 0.027098652621301754\n },\n \"harness|hendrycksTest-prehistory|5\"\ : {\n \"acc\": 0.6327160493827161,\n \"acc_stderr\": 0.026822801759507894,\n\ \ \"acc_norm\": 0.6327160493827161,\n \"acc_norm_stderr\": 0.026822801759507894\n\ \ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\ acc\": 0.41843971631205673,\n \"acc_stderr\": 0.02942799403941999,\n \ \ \"acc_norm\": 0.41843971631205673,\n \"acc_norm_stderr\": 0.02942799403941999\n\ \ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4074315514993481,\n\ \ \"acc_stderr\": 0.012549473714212226,\n \"acc_norm\": 0.4074315514993481,\n\ \ \"acc_norm_stderr\": 0.012549473714212226\n },\n \"harness|hendrycksTest-professional_medicine|5\"\ : {\n \"acc\": 0.5367647058823529,\n \"acc_stderr\": 0.030290619180485687,\n\ \ \"acc_norm\": 0.5367647058823529,\n \"acc_norm_stderr\": 0.030290619180485687\n\ \ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\ acc\": 0.5473856209150327,\n \"acc_stderr\": 0.020136790918492527,\n \ \ \"acc_norm\": 0.5473856209150327,\n \"acc_norm_stderr\": 0.020136790918492527\n\ \ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6090909090909091,\n\ \ \"acc_stderr\": 0.04673752333670239,\n \"acc_norm\": 0.6090909090909091,\n\ \ \"acc_norm_stderr\": 0.04673752333670239\n },\n \"harness|hendrycksTest-security_studies|5\"\ : {\n \"acc\": 0.6489795918367347,\n \"acc_stderr\": 0.03055531675557364,\n\ \ \"acc_norm\": 0.6489795918367347,\n \"acc_norm_stderr\": 0.03055531675557364\n\ \ },\n 
\"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7512437810945274,\n\ \ \"acc_stderr\": 0.030567675938916714,\n \"acc_norm\": 0.7512437810945274,\n\ \ \"acc_norm_stderr\": 0.030567675938916714\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\ : {\n \"acc\": 0.82,\n \"acc_stderr\": 0.038612291966536934,\n \ \ \"acc_norm\": 0.82,\n \"acc_norm_stderr\": 0.038612291966536934\n \ \ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4036144578313253,\n\ \ \"acc_stderr\": 0.038194861407583984,\n \"acc_norm\": 0.4036144578313253,\n\ \ \"acc_norm_stderr\": 0.038194861407583984\n },\n \"harness|hendrycksTest-world_religions|5\"\ : {\n \"acc\": 0.783625730994152,\n \"acc_stderr\": 0.031581495393387324,\n\ \ \"acc_norm\": 0.783625730994152,\n \"acc_norm_stderr\": 0.031581495393387324\n\ \ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3011015911872705,\n\ \ \"mc1_stderr\": 0.016058999026100612,\n \"mc2\": 0.4313573698064727,\n\ \ \"mc2_stderr\": 0.014673057614679777\n }\n}\n```" repo_url: https://huggingface.co/CHIH-HUNG/llama-2-13b-open_orca_20w leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_arc_challenge_25 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|arc:challenge|25_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|arc:challenge|25_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hellaswag_10 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hellaswag|10_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hellaswag|10_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2023-08-30T09:16:35.193244.parquet' - 
'**/details_harness|hendrycksTest-astronomy|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-30T09:16:35.193244.parquet' - 
'**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-international_law|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-management|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-marketing|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-30T09:16:35.193244.parquet' - 
'**/details_harness|hendrycksTest-nutrition|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-sociology|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-virology|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-30T09:16:35.193244.parquet' - 
'**/details_harness|hendrycksTest-college_mathematics|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-30T09:16:35.193244.parquet' - 
'**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-international_law|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-management|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-marketing|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-30T09:16:35.193244.parquet' - 
'**/details_harness|hendrycksTest-professional_psychology|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-sociology|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-virology|5_2023-08-30T09:16:35.193244.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_abstract_algebra_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_anatomy_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-anatomy|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-anatomy|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_astronomy_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-astronomy|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-astronomy|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_business_ethics_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-business_ethics|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-business_ethics|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_clinical_knowledge_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - 
'**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_college_biology_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-college_biology|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_biology|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_college_chemistry_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_college_computer_science_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_college_mathematics_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_college_medicine_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-college_medicine|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_medicine|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_college_physics_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - 
'**/details_harness|hendrycksTest-college_physics|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_physics|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_computer_security_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-computer_security|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-computer_security|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_conceptual_physics_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_econometrics_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-econometrics|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-econometrics|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_electrical_engineering_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_elementary_mathematics_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_formal_logic_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-formal_logic|5_2023-08-30T09:16:35.193244.parquet' - 
split: latest path: - '**/details_harness|hendrycksTest-formal_logic|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_global_facts_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-global_facts|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-global_facts|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_high_school_biology_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_high_school_chemistry_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_high_school_computer_science_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_high_school_european_history_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_high_school_geography_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-high_school_geography|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_high_school_government_and_politics_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_high_school_macroeconomics_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_high_school_mathematics_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_high_school_microeconomics_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_high_school_physics_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_high_school_psychology_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - 
'**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_high_school_statistics_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_high_school_us_history_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_high_school_world_history_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_human_aging_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-human_aging|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_aging|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_human_sexuality_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_international_law_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - 
'**/details_harness|hendrycksTest-international_law|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-international_law|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_jurisprudence_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_logical_fallacies_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_machine_learning_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-machine_learning|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-machine_learning|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_management_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-management|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-management|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_marketing_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-marketing|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-marketing|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_medical_genetics_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-medical_genetics|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_miscellaneous_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_moral_disputes_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_moral_scenarios_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_nutrition_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-nutrition|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-nutrition|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_philosophy_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-philosophy|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-philosophy|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_prehistory_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-prehistory|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-prehistory|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_professional_accounting_5 data_files: - split: 
2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_professional_law_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-professional_law|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_law|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_professional_medicine_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_professional_psychology_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_public_relations_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-public_relations|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-public_relations|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_security_studies_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-security_studies|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-security_studies|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_sociology_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - 
'**/details_harness|hendrycksTest-sociology|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-sociology|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_us_foreign_policy_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_virology_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-virology|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-virology|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_hendrycksTest_world_religions_5 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|hendrycksTest-world_religions|5_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|hendrycksTest-world_religions|5_2023-08-30T09:16:35.193244.parquet' - config_name: harness_truthfulqa_mc_0 data_files: - split: 2023_08_30T09_16_35.193244 path: - '**/details_harness|truthfulqa:mc|0_2023-08-30T09:16:35.193244.parquet' - split: latest path: - '**/details_harness|truthfulqa:mc|0_2023-08-30T09:16:35.193244.parquet' - config_name: results data_files: - split: 2023_08_30T09_16_35.193244 path: - results_2023-08-30T09:16:35.193244.parquet - split: latest path: - results_2023-08-30T09:16:35.193244.parquet --- # Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-open_orca_20w ## Dataset Description - **Homepage:** - **Repository:** https://huggingface.co/CHIH-HUNG/llama-2-13b-open_orca_20w - **Paper:** - **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard - **Point of Contact:** clementine@hf.co ### Dataset Summary Dataset automatically created during the evaluation run of model 
[CHIH-HUNG/llama-2-13b-open_orca_20w](https://huggingface.co/CHIH-HUNG/llama-2-13b-open_orca_20w) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-open_orca_20w",
	"harness_truthfulqa_mc_0",
	split="train")
```

## Latest results

These are the [latest results from run 2023-08-30T09:16:35.193244](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-open_orca_20w/blob/main/results_2023-08-30T09%3A16%3A35.193244.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks.
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.5638417257556474, "acc_stderr": 0.03429434698845683, "acc_norm": 0.5680270426638301, "acc_norm_stderr": 0.03427324810755194, "mc1": 0.3011015911872705, "mc1_stderr": 0.016058999026100612, "mc2": 0.4313573698064727, "mc2_stderr": 0.014673057614679777 }, "harness|arc:challenge|25": { "acc": 0.5588737201365188, "acc_stderr": 0.014509747749064664, "acc_norm": 0.5989761092150171, "acc_norm_stderr": 0.014322255790719869 }, "harness|hellaswag|10": { "acc": 0.6183031268671579, "acc_stderr": 0.004848099661619698, "acc_norm": 0.8251344353714399, "acc_norm_stderr": 0.0037907576465759014 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.33, "acc_stderr": 0.047258156262526066, "acc_norm": 0.33, "acc_norm_stderr": 0.047258156262526066 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.4666666666666667, "acc_stderr": 0.043097329010363554, "acc_norm": 0.4666666666666667, "acc_norm_stderr": 0.043097329010363554 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.5723684210526315, "acc_stderr": 0.04026097083296564, "acc_norm": 0.5723684210526315, "acc_norm_stderr": 0.04026097083296564 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.54, "acc_stderr": 0.05009082659620332, "acc_norm": 0.54, "acc_norm_stderr": 0.05009082659620332 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6377358490566037, "acc_stderr": 0.029582245128384303, "acc_norm": 0.6377358490566037, "acc_norm_stderr": 0.029582245128384303 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.5694444444444444, "acc_stderr": 0.04140685639111502, "acc_norm": 0.5694444444444444, "acc_norm_stderr": 0.04140685639111502 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.43, "acc_stderr": 0.049756985195624284, "acc_norm": 0.43, "acc_norm_stderr": 0.049756985195624284 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.45, "acc_stderr": 0.049999999999999996, "acc_norm": 0.45, 
"acc_norm_stderr": 0.049999999999999996 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.36, "acc_stderr": 0.04824181513244218, "acc_norm": 0.36, "acc_norm_stderr": 0.04824181513244218 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.5144508670520231, "acc_stderr": 0.03810871630454764, "acc_norm": 0.5144508670520231, "acc_norm_stderr": 0.03810871630454764 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.28431372549019607, "acc_stderr": 0.04488482852329017, "acc_norm": 0.28431372549019607, "acc_norm_stderr": 0.04488482852329017 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.74, "acc_stderr": 0.04408440022768078, "acc_norm": 0.74, "acc_norm_stderr": 0.04408440022768078 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.4425531914893617, "acc_stderr": 0.03246956919789958, "acc_norm": 0.4425531914893617, "acc_norm_stderr": 0.03246956919789958 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.21929824561403508, "acc_stderr": 0.03892431106518754, "acc_norm": 0.21929824561403508, "acc_norm_stderr": 0.03892431106518754 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5310344827586206, "acc_stderr": 0.04158632762097828, "acc_norm": 0.5310344827586206, "acc_norm_stderr": 0.04158632762097828 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.3492063492063492, "acc_stderr": 0.024552292209342665, "acc_norm": 0.3492063492063492, "acc_norm_stderr": 0.024552292209342665 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.4126984126984127, "acc_stderr": 0.04403438954768176, "acc_norm": 0.4126984126984127, "acc_norm_stderr": 0.04403438954768176 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.37, "acc_stderr": 0.048523658709391, "acc_norm": 0.37, "acc_norm_stderr": 0.048523658709391 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.6516129032258065, "acc_stderr": 0.027104826328100944, "acc_norm": 0.6516129032258065, "acc_norm_stderr": 0.027104826328100944 }, 
"harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.46798029556650245, "acc_stderr": 0.035107665979592154, "acc_norm": 0.46798029556650245, "acc_norm_stderr": 0.035107665979592154 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.58, "acc_stderr": 0.049604496374885836, "acc_norm": 0.58, "acc_norm_stderr": 0.049604496374885836 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.6787878787878788, "acc_stderr": 0.036462049632538115, "acc_norm": 0.6787878787878788, "acc_norm_stderr": 0.036462049632538115 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7121212121212122, "acc_stderr": 0.03225883512300992, "acc_norm": 0.7121212121212122, "acc_norm_stderr": 0.03225883512300992 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8341968911917098, "acc_stderr": 0.026839845022314415, "acc_norm": 0.8341968911917098, "acc_norm_stderr": 0.026839845022314415 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.5025641025641026, "acc_stderr": 0.025350672979412195, "acc_norm": 0.5025641025641026, "acc_norm_stderr": 0.025350672979412195 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.2777777777777778, "acc_stderr": 0.027309140588230182, "acc_norm": 0.2777777777777778, "acc_norm_stderr": 0.027309140588230182 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.5756302521008403, "acc_stderr": 0.032104790510157764, "acc_norm": 0.5756302521008403, "acc_norm_stderr": 0.032104790510157764 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.3509933774834437, "acc_stderr": 0.03896981964257375, "acc_norm": 0.3509933774834437, "acc_norm_stderr": 0.03896981964257375 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.7339449541284404, "acc_stderr": 0.018946022322225597, "acc_norm": 0.7339449541284404, "acc_norm_stderr": 0.018946022322225597 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.46296296296296297, "acc_stderr": 
0.03400603625538271, "acc_norm": 0.46296296296296297, "acc_norm_stderr": 0.03400603625538271 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7598039215686274, "acc_stderr": 0.02998373305591361, "acc_norm": 0.7598039215686274, "acc_norm_stderr": 0.02998373305591361 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7552742616033755, "acc_stderr": 0.02798569938703643, "acc_norm": 0.7552742616033755, "acc_norm_stderr": 0.02798569938703643 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6457399103139013, "acc_stderr": 0.032100621541349864, "acc_norm": 0.6457399103139013, "acc_norm_stderr": 0.032100621541349864 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.6412213740458015, "acc_stderr": 0.04206739313864908, "acc_norm": 0.6412213740458015, "acc_norm_stderr": 0.04206739313864908 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7272727272727273, "acc_stderr": 0.04065578140908705, "acc_norm": 0.7272727272727273, "acc_norm_stderr": 0.04065578140908705 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.6944444444444444, "acc_stderr": 0.044531975073749834, "acc_norm": 0.6944444444444444, "acc_norm_stderr": 0.044531975073749834 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.6809815950920245, "acc_stderr": 0.03661997551073836, "acc_norm": 0.6809815950920245, "acc_norm_stderr": 0.03661997551073836 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.2857142857142857, "acc_stderr": 0.04287858751340456, "acc_norm": 0.2857142857142857, "acc_norm_stderr": 0.04287858751340456 }, "harness|hendrycksTest-management|5": { "acc": 0.7378640776699029, "acc_stderr": 0.04354631077260595, "acc_norm": 0.7378640776699029, "acc_norm_stderr": 0.04354631077260595 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8076923076923077, "acc_stderr": 0.02581923325648372, "acc_norm": 0.8076923076923077, "acc_norm_stderr": 0.02581923325648372 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.58, "acc_stderr": 
0.049604496374885836, "acc_norm": 0.58, "acc_norm_stderr": 0.049604496374885836 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.7662835249042146, "acc_stderr": 0.015133383278988827, "acc_norm": 0.7662835249042146, "acc_norm_stderr": 0.015133383278988827 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.6647398843930635, "acc_stderr": 0.02541600377316555, "acc_norm": 0.6647398843930635, "acc_norm_stderr": 0.02541600377316555 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.4480446927374302, "acc_stderr": 0.016631976628930595, "acc_norm": 0.4480446927374302, "acc_norm_stderr": 0.016631976628930595 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.6372549019607843, "acc_stderr": 0.02753007844711031, "acc_norm": 0.6372549019607843, "acc_norm_stderr": 0.02753007844711031 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.6495176848874598, "acc_stderr": 0.027098652621301754, "acc_norm": 0.6495176848874598, "acc_norm_stderr": 0.027098652621301754 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.6327160493827161, "acc_stderr": 0.026822801759507894, "acc_norm": 0.6327160493827161, "acc_norm_stderr": 0.026822801759507894 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.41843971631205673, "acc_stderr": 0.02942799403941999, "acc_norm": 0.41843971631205673, "acc_norm_stderr": 0.02942799403941999 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4074315514993481, "acc_stderr": 0.012549473714212226, "acc_norm": 0.4074315514993481, "acc_norm_stderr": 0.012549473714212226 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.5367647058823529, "acc_stderr": 0.030290619180485687, "acc_norm": 0.5367647058823529, "acc_norm_stderr": 0.030290619180485687 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.5473856209150327, "acc_stderr": 0.020136790918492527, "acc_norm": 0.5473856209150327, "acc_norm_stderr": 0.020136790918492527 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6090909090909091, "acc_stderr": 
0.04673752333670239, "acc_norm": 0.6090909090909091, "acc_norm_stderr": 0.04673752333670239 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.6489795918367347, "acc_stderr": 0.03055531675557364, "acc_norm": 0.6489795918367347, "acc_norm_stderr": 0.03055531675557364 }, "harness|hendrycksTest-sociology|5": { "acc": 0.7512437810945274, "acc_stderr": 0.030567675938916714, "acc_norm": 0.7512437810945274, "acc_norm_stderr": 0.030567675938916714 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.82, "acc_stderr": 0.038612291966536934, "acc_norm": 0.82, "acc_norm_stderr": 0.038612291966536934 }, "harness|hendrycksTest-virology|5": { "acc": 0.4036144578313253, "acc_stderr": 0.038194861407583984, "acc_norm": 0.4036144578313253, "acc_norm_stderr": 0.038194861407583984 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.783625730994152, "acc_stderr": 0.031581495393387324, "acc_norm": 0.783625730994152, "acc_norm_stderr": 0.031581495393387324 }, "harness|truthfulqa:mc|0": { "mc1": 0.3011015911872705, "mc1_stderr": 0.016058999026100612, "mc2": 0.4313573698064727, "mc2_stderr": 0.014673057614679777 } } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
argilla/banking_sentiment_setfit
--- dataset_info: features: - name: text dtype: string - name: label dtype: class_label: names: '0': negative '1': neutral splits: - name: train num_bytes: 7433.25 num_examples: 108 - name: test num_bytes: 2477.75 num_examples: 36 download_size: 8087 dataset_size: 9911.0 --- # Dataset Card for "banking_sentiment_setfit" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
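The `class_label` feature above maps the integer ids `0` and `1` to the names `negative` and `neutral`. As a minimal, hypothetical sketch in plain Python (the helper name is illustrative, not part of the dataset), decoding predicted ids back to sentiment names could look like:

```python
# Label order taken from the card's class_label metadata:
# '0': negative, '1': neutral
LABEL_NAMES = ["negative", "neutral"]

def decode_labels(label_ids):
    """Map integer class ids to their string names."""
    return [LABEL_NAMES[i] for i in label_ids]

# Example: three model outputs decoded to sentiment names.
print(decode_labels([0, 1, 0]))  # ['negative', 'neutral', 'negative']
```

The same mapping is exposed by the `datasets` library as a `ClassLabel` feature (`features["label"].int2str`), so the manual list is only needed when working outside that library.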
autoevaluate/autoeval-staging-eval-project-f0d30a26-9815307
--- type: predictions tags: - autotrain - evaluation datasets: - conll2003 eval_info: task: entity_extraction model: AlexanderPeter/bert-finetuned-ner metrics: ['bleu'] dataset_name: conll2003 dataset_config: conll2003 dataset_split: test col_mapping: tokens: tokens tags: ner_tags --- # Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: AlexanderPeter/bert-finetuned-ner * Dataset: conll2003 To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@test](https://huggingface.co/test) for evaluating this model.
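The predictions target conll2003-style `ner_tags`, which are stored as integer ids. As a hedged sketch (the id-to-label order below is an assumption based on the commonly used conll2003 label set and should be checked against the dataset's features), decoding the ids might look like:

```python
# Assumed conll2003 NER label order; verify against the dataset's
# features["ner_tags"] before relying on it.
NER_LABELS = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG",
              "B-LOC", "I-LOC", "B-MISC", "I-MISC"]

def decode_ner_tags(tag_ids):
    """Convert integer ner_tags into their string labels."""
    return [NER_LABELS[t] for t in tag_ids]

# E.g. a sentence like "EU rejects German call" would decode as
# organization / outside / miscellaneous / outside:
print(decode_ner_tags([3, 0, 7, 0]))  # ['B-ORG', 'O', 'B-MISC', 'O']
```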
ai-forever/spellcheck_benchmark
---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ru
license: mit
multilinguality:
- monolingual
size_categories:
- 10K<n<20K
task_categories:
- text-generation
pretty_name: Russian Spellcheck Benchmark
language_bcp47:
- ru-RU
tags:
- spellcheck
- russian
---

# Dataset Card for Russian Spellcheck Benchmark

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Repository:** [SAGE](https://github.com/ai-forever/sage)
- **Paper:** [arXiv:2308.09435](https://arxiv.org/abs/2308.09435)
- **Point of Contact:** nikita.martynov.98@list.ru

### Dataset Summary

Spellcheck Benchmark includes four datasets, each of which consists of pairs of sentences in the Russian language. Each pair comprises a sentence that may contain spelling errors and its corresponding correction.
Datasets were gathered from various sources and domains including social networks, internet blogs, GitHub commits, medical anamnesis, literature, news, reviews and more.

All datasets were passed through a two-stage manual labeling pipeline. The correction of a sentence is defined by an agreement of at least two human annotators. The manual labeling scheme accounts for jargonisms, collocations and common language, hence in some cases it encourages annotators not to amend a word in favor of preserving the style of a text.

### Supported Tasks and Leaderboards

- **Task:** automatic spelling correction.
- **Metrics:** https://www.dialog-21.ru/media/3427/sorokinaaetal.pdf.

### Languages

Russian.

## Dataset Structure

### Data Instances

#### RUSpellRU

- **Size of downloaded dataset files:** 3.64 Mb
- **Size of the generated dataset:** 1.29 Mb
- **Total amount of disk used:** 4.93 Mb

An example of "train" / "test" looks as follows
```
{
    "source": "очень классная тетка ктобы что не говорил.",
    "correction": "очень классная тетка кто бы что ни говорил",
}
```

#### MultidomainGold

- **Size of downloaded dataset files:** 15.05 Mb
- **Size of the generated dataset:** 5.43 Mb
- **Total amount of disk used:** 20.48 Mb

An example of "test" looks as follows
```
{
    "source": "Ну что могу сказать... Я заказала 2 вязанных платья: за 1000 руб (у др продавца) и это ща 1200. Это платье- голимая синтетика (в том платье в составе была шерсть). Это платье как очень плохая резинка. На свои параметры (83-60-85) я заказала С . Пока одевала/снимала - оно в горловине растянулось. Помимо этого в этом платье я выгляжу ну очень тоской. У меня вес 43 кг на 165 см роста. Кстати, продавец отправлял платье очень долго. Я пыталась отказаться от заказа, но он постоянно отклонял мой запрос. В общем не советую.",
    "correction": "Ну что могу сказать... Я заказала 2 вязаных платья: за 1000 руб (у др продавца) и это ща 1200. Это платье- голимая синтетика (в том платье в составе была шерсть).
Это платье как очень плохая резинка. На свои параметры (83-60-85) я заказала С . Пока надевала/снимала - оно в горловине растянулось. Помимо этого в этом платье я выгляжу ну очень доской. У меня вес 43 кг на 165 см роста. Кстати, продавец отправлял платье очень долго. Я пыталась отказаться от заказа, но он постоянно отклонял мой запрос. В общем не советую.", "domain": "reviews", } ``` #### MedSpellcheck - **Size of downloaded dataset files:** 1.49 Mb - **Size of the generated dataset:** 0.54 Mb - **Total amount of disk used:** 2.03 Mb An example of "test" looks as follows ``` { "source": "Кровотечения, поерации в анамнезе отрицает", "correction": "Кровотечения, операции в анамнезе отрицает", } ``` #### GitHubTypoCorpusRu - **Size of downloaded dataset files:** 1.23 Mb - **Size of the generated dataset:** 0.48 Mb - **Total amount of disk used:** 1.71 Mb An example of "test" looks as follows ``` { "source": "## Запросы и ответа содержат заголовки", "correction": "## Запросы и ответы содержат заголовки", } ``` ### Data Fields #### RUSpellRU - `source`: a `string` feature - `correction`: a `string` feature - `domain`: a `string` feature #### MultidomainGold - `source`: a `string` feature - `correction`: a `string` feature - `domain`: a `string` feature #### MedSpellcheck - `source`: a `string` feature - `correction`: a `string` feature - `domain`: a `string` feature #### GitHubTypoCorpusRu - `source`: a `string` feature - `correction`: a `string` feature - `domain`: a `string` feature ### Data Splits #### RUSpellRU | |train|test| |---|---:|---:| |RUSpellRU|2000|2008| #### MultidomainGold | |train|test| |---|---:|---:| |web|386|756| |news|361|245| |social_media|430|200| |reviews|584|586| |subtitles|1810|1810| |strategic_documents|-|250| |literature|-|260| #### MedSpellcheck | |test| |---|---:| |MedSpellcheck|1054| #### GitHubTypoCorpusRu | |test| |---|---:| |GitHubTypoCorpusRu|868| ## Dataset Creation ### Source Data #### Initial Data Collection and Normalization The 
datasets were chosen in accordance with the following criteria. First, domain variation: half of the datasets are chosen from different domains to ensure diversity, while the remaining half are from a single domain. Another criterion is the nature of the errors: the datasets comprise exclusively spelling (orthographic) mistakes, i.e. mistypings, omitting grammatical or more complex errors made by non-native speakers. - **RUSpellRU**: texts collected from [LiveJournal](https://www.livejournal.com/media), with manually corrected typos and errors; - **MultidomainGold**: examples collected from several text sources, including the open web, news, social media, reviews, subtitles, policy documents and literary works: *Aranea web-corpus* is a family of multilingual gigaword web-corpora collected from Internet resources. The texts in the corpora are evenly distributed across periods, writing styles and topics they cover. We randomly picked the sentences from Araneum Russicum, which is harvested from the Russian part of the web. *Literature* is a collection of Russian poems and prose from different classical literary works. We randomly picked sentences from the source dataset that were gathered from Ilibrary, LitLib, and Wikisource. *News*, as the name suggests, covers news articles on various topics such as sports, politics, environment, economy etc. The passages are randomly picked from the summarization dataset Gazeta.ru. *Social media* is the text domain from social media platforms marked with specific hashtags. These texts are typically short, written in an informal style and may contain slang, emojis and obscene lexis. *Strategic Documents* is part of the dataset the Ministry of Economic Development of the Russian Federation collected. Texts are written in a bureaucratic manner, rich in embedded entities, and have complex syntactic and discourse structures. The full version of the dataset has been previously used in the RuREBus shared task. 
- **MedSpellChecker**: texts with errors from medical anamnesis; - **GitHubTypoCorpusRu**: spelling errors and typos in commits from [GitHub](https://github.com); ### Annotations #### Annotation process We set up a two-stage annotation project via the crowd-sourcing platform Toloka: 1. Data gathering stage: we provide the texts with possible mistakes to annotators and ask them to write the sentence correctly; 2. Validation stage: we provide annotators with a pair of sentences (the source and its corresponding correction from the previous stage) and ask them to check whether the correction is right. We prepared instructions for annotators for each task. The instructions ask annotators to correct misspellings if doing so does not alter the original style of the text. The instructions do not provide rigorous criteria for distinguishing the nature of an error in terms of its origin, that is, whether it came from an urge to endow a sentence with particular stylistic features or from an unintentional spelling violation, since it is time-consuming and laborious to describe every possible case of employing slang, dialect, colloquialisms, etc. instead of proper language. The instructions also do not distinguish errors that come from the geographical or social background of the source. Instead, we rely on annotators’ knowledge and understanding of a language since, in this work, the important factor is to preserve the original style of the text. To ensure we receive qualified expertise, we set up a test iteration on a small subset of the data for both stages. We manually validated the test results and selected annotators who processed at least six samples (2% of the total test iteration) and did not make a single error. After the test iteration, we cut 85% and 86% of labellers for the gathering and validation stages, respectively. 
We especially urge annotators to correct mistakes associated with the substitution of the letters "ё", "й" and "щ" for the corresponding "е", "и" and "ш", and not to expand abbreviations or correct punctuation errors. Each annotator is also warned about potentially sensitive topics in the data (e.g., politics, societal minorities, and religion). #### Who are the annotators? Native Russian speakers who passed the language exam. ## Considerations for Using the Data ### Discussion of Biases We clearly state our work’s aims and implications, making it open source and transparent. The data will be available under a public license. As our research involved anonymized textual data, informed consent from human participants was not required. However, we obtained permission to access publicly available datasets and ensured compliance with any applicable terms of service or usage policies. ### Other Known Limitations The data used in our research may be limited to specific domains, preventing comprehensive coverage of all possible text variations. Despite these limitations, we tried to address the issue of data diversity by incorporating single-domain and multi-domain datasets in the proposed research. This approach allowed us to shed light on the diversity and variances within the data, providing valuable insights despite the inherent constraints. We primarily focus on the Russian language. Further research is needed to expand the datasets for a wider range of languages. ## Additional Information ### Future plans We are planning to expand our benchmark with both new Russian datasets and datasets in other languages, including (but not limited to) European and CIS languages. If you would like to contribute, please contact us. ### Dataset Curators Nikita Martynov nikita.martynov.98@list.ru ### Licensing Information All our datasets are published under the MIT License. 
### Citation Information ``` @inproceedings{martynov2023augmentation, title={Augmentation methods for spelling corruptions}, author={Martynov, Nikita and Baushenko, Mark and Abramov, Alexander and Fenogenova, Alena}, booktitle={Proceedings of the International Conference “Dialogue”}, volume={2023}, year={2023} } @misc{martynov2023methodology, title={A Methodology for Generative Spelling Correction via Natural Spelling Errors Emulation across Multiple Domains and Languages}, author={Nikita Martynov and Mark Baushenko and Anastasia Kozlova and Katerina Kolomeytseva and Aleksandr Abramov and Alena Fenogenova}, year={2023}, eprint={2308.09435}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
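The (source, correction) pairs in this benchmark can be diffed word-by-word to recover the individual edits. A minimal sketch using Python's `difflib` follows; this is an illustration only, not the official evaluation script linked in the metrics section, and the helper name is ours:

```python
import difflib

def word_edits(source: str, correction: str):
    """Align two sentences word-by-word and return the (wrong, fixed) span pairs."""
    src, cor = source.split(), correction.split()
    matcher = difflib.SequenceMatcher(a=src, b=cor)
    edits = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op != "equal":  # keep only spans that actually changed
            edits.append((" ".join(src[i1:i2]), " ".join(cor[j1:j2])))
    return edits

# The RUSpellRU example pair from this card:
pair = {
    "source": "очень классная тетка ктобы что не говорил.",
    "correction": "очень классная тетка кто бы что ни говорил",
}
print(word_edits(pair["source"], pair["correction"]))
# → [('ктобы', 'кто бы'), ('не говорил.', 'ни говорил')]
```

Note that adjacent changed words are merged into one span by `get_opcodes`, so "не говорил." is reported as a single edit here.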
AlvianKhairi/my-pandas-dataset-AbstractAndLink
--- dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 113697146 num_examples: 276033 download_size: 48418586 dataset_size: 113697146 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "my-pandas-dataset-AbstractAndLink" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Cohere/wikipedia-22-12-hi-embeddings
--- annotations_creators: - expert-generated language: - hi multilinguality: - multilingual size_categories: [] source_datasets: [] tags: [] task_categories: - text-retrieval license: - apache-2.0 task_ids: - document-retrieval --- # Wikipedia (hi) embedded with cohere.ai `multilingual-22-12` encoder We encoded [Wikipedia (hi)](https://hi.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model. To get an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12). ## Embeddings We computed the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/). ## Further languages We provide embeddings of Wikipedia in many different languages: [ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings). You can find the Wikipedia datasets without embeddings at 
[Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12). ## Loading the dataset You can either load the dataset like this: ```python from datasets import load_dataset docs = load_dataset("Cohere/wikipedia-22-12-hi-embeddings", split="train") ``` Alternatively, you can stream it without downloading it first: ```python from datasets import load_dataset docs = load_dataset("Cohere/wikipedia-22-12-hi-embeddings", split="train", streaming=True) for doc in docs: docid = doc['id'] title = doc['title'] text = doc['text'] emb = doc['emb'] ``` ## Search A full search example: ```python # Run: pip install cohere datasets from datasets import load_dataset import torch import cohere co = cohere.Client("<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com # Load at most 1000 documents + embeddings max_docs = 1000 docs_stream = load_dataset("Cohere/wikipedia-22-12-hi-embeddings", split="train", streaming=True) docs = [] doc_embeddings = [] for doc in docs_stream: docs.append(doc) doc_embeddings.append(doc['emb']) if len(docs) >= max_docs: break doc_embeddings = torch.tensor(doc_embeddings) query = 'Who founded Youtube' response = co.embed(texts=[query], model='multilingual-22-12') query_embedding = response.embeddings query_embedding = torch.tensor(query_embedding) # Compute the dot score between the query embedding and all document embeddings dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1)) top_k = torch.topk(dot_scores, k=3) # Print results print("Query:", query) for doc_id in top_k.indices[0].tolist(): print(docs[doc_id]['title']) print(docs[doc_id]['text'], "\n") ``` ## Performance You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance)
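The search example ranks documents by dot product. For quick, dependency-free experiments, the same exact ranking can be sketched in pure Python; the toy 4-dimensional vectors below merely stand in for the real (much higher-dimensional) `multilingual-22-12` embeddings:

```python
# Exact dot-product search without torch: fine for a few thousand documents.
def top_k(query, doc_embeddings, k=3):
    scores = [
        (sum(q * d for q, d in zip(query, emb)), idx)
        for idx, emb in enumerate(doc_embeddings)
    ]
    scores.sort(reverse=True)  # highest dot product first
    return [(idx, score) for score, idx in scores[:k]]

# Toy vectors standing in for real document embeddings.
docs_emb = [
    [0.1, 0.9, 0.0, 0.0],
    [0.9, 0.1, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
]
query_emb = [1.0, 0.0, 0.0, 0.0]
print(top_k(query_emb, docs_emb, k=2))
```

For large corpora you would switch to a vectorized or approximate nearest-neighbor search; this sketch only illustrates the scoring rule itself.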
CyberHarem/naganami_azurlane
--- license: mit task_categories: - text-to-image tags: - art - not-for-all-audiences size_categories: - n<1K --- # Dataset of naganami/長波/长波 (Azur Lane) This is the dataset of naganami/長波/长波 (Azur Lane), containing 30 images and their tags. The core tags of this character are `long_hair, black_hair, animal_ears, breasts, tail, yellow_eyes, large_breasts, hair_ornament, bangs, multicolored_hair, hair_flower, animal_ear_fluff, white_hair`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:----------|:----------|:-----------|:----------| | raw | 30 | 39.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/naganami_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 30 | 24.53 MiB | [Download](https://huggingface.co/datasets/CyberHarem/naganami_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 74 | 49.19 MiB | [Download](https://huggingface.co/datasets/CyberHarem/naganami_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 30 | 36.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/naganami_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. 
| | stage3-p480-1200 | 74 | 63.89 MiB | [Download](https://huggingface.co/datasets/CyberHarem/naganami_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code: ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/naganami_azurlane', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering results; some outfits may be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 18 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, bare_shoulders, looking_at_viewer, solo, simple_background, flower, off_shoulder, blush, smile, ass, long_sleeves, white_background, looking_back, open_mouth, white_kimono, bow, no_panties, streaked_hair, wide_sleeves, black_skirt, floral_print, jingle_bell | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, looking_at_viewer, solo, feet, flower, official_alternate_costume, black_headwear, lying, no_shoes, pantyhose, toes, bare_shoulders, blush, legs, on_bed, pillow, see-through, simple_background, soles, top_hat, twin_braids, two-tone_dress, very_long_hair, white_background | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bare_shoulders | looking_at_viewer | solo | simple_background | flower | off_shoulder | blush | smile | ass | long_sleeves | white_background | looking_back | open_mouth | white_kimono | bow | no_panties | streaked_hair | wide_sleeves | black_skirt | floral_print | jingle_bell | feet | official_alternate_costume | black_headwear | lying | no_shoes | pantyhose | toes | legs | on_bed | pillow | see-through | soles | top_hat | twin_braids | 
two-tone_dress | very_long_hair | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------------|:--------------------|:-------|:--------------------|:---------|:---------------|:--------|:--------|:------|:---------------|:-------------------|:---------------|:-------------|:---------------|:------|:-------------|:----------------|:---------------|:--------------|:---------------|:--------------|:-------|:-----------------------------|:-----------------|:--------|:-----------|:------------|:-------|:-------|:---------|:---------|:--------------|:--------|:----------|:--------------|:-----------------|:-----------------| | 0 | 18 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | | X | | | | X | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
lorinma/EvolInstruct_zh_DeepseekAPI
--- license: mit language: - zh --- Compared with the previous Evol-Instruct attempt (https://huggingface.co/datasets/lorinma/Chinese_Evol_Instruct_3.5), this run uses Chinese prompts. Because the OpenAI API is too expensive, we used the 10 million free tokens granted by DeepSeek; generating the roughly 10,000 examples here used almost all of them. There are three files in total: combined_seed_correct.json: the 371 base seed tasks used, in alpaca format. They include 175 Chinese seed tasks from Belle; in addition, following [4], ShareGPT data was added to get closer to real-world usage by mixing in 196 sampled entries from Wildchat-zh, keeping only the first meaningful question-answer pair of each multi-turn conversation. evolve_chinese.py: based on the H2O EvolInstruction code. 0227_evol_combinedseedcorrect.json: the 12,000 generated examples.
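For reference, one "depth" evolution step of an Evol-Instruct-style pipeline over alpaca-format seeds can be sketched as below. This is an illustration only: the template text and helper names are assumptions, not quotes from evolve_chinese.py, and the real pipeline would call the DeepSeek API where the stub stands:

```python
# A minimal sketch of one "depth" evolution step: wrap a seed instruction in a
# rewriting prompt and hand it to an LLM (stubbed out here for demonstration).
DEPTH_TEMPLATE = (
    "Please rewrite the following instruction into a more complex version "
    "that can still be answered, adding one extra constraint or requirement. "
    "Reply with the rewritten instruction only.\n\n#Instruction#\n{instruction}"
)

def build_depth_prompt(seed_instruction: str) -> str:
    return DEPTH_TEMPLATE.format(instruction=seed_instruction)

def evolve(seed_records, llm_call):
    """seed_records: alpaca-format dicts with an 'instruction' field."""
    evolved = []
    for rec in seed_records:
        prompt = build_depth_prompt(rec["instruction"])
        # a real run would fill 'output' with a second LLM call answering it
        evolved.append({"instruction": llm_call(prompt), "input": "", "output": ""})
    return evolved

# Stubbed "LLM": echoes the last line of the prompt (i.e. the seed instruction).
fake_llm = lambda prompt: "EVOLVED: " + prompt.rsplit("\n", 1)[-1]
seeds = [{"instruction": "写一首关于春天的诗", "input": "", "output": "一首诗..."}]
print(evolve(seeds, fake_llm))
```

A real run would also add "breadth" steps (generating brand-new instructions of similar difficulty) and filter out failed evolutions before saving the JSON.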
CyberHarem/wayu_fireemblem
--- license: mit task_categories: - text-to-image tags: - art - not-for-all-audiences size_categories: - n<1K --- # Dataset of wayu/ワユ (Fire Emblem) This is the dataset of wayu/ワユ (Fire Emblem), containing 500 images and their tags. The core tags of this character are `long_hair, green_eyes, blue_hair, hairband, white_hairband, breasts, ahoge, purple_hair, headband`, which are pruned in this dataset. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)). ## List of Packages | Name | Images | Size | Download | Type | Description | |:-----------------|---------:|:-----------|:----------|:-----------|:----------| | raw | 500 | 620.79 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wayu_fireemblem/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). | | 800 | 500 | 370.40 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wayu_fireemblem/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. | | stage3-p480-800 | 1194 | 781.07 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wayu_fireemblem/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | | 1200 | 500 | 558.48 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wayu_fireemblem/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. 
| | stage3-p480-1200 | 1194 | 1.05 GiB | [Download](https://huggingface.co/datasets/CyberHarem/wayu_fireemblem/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. | ### Load Raw Dataset with Waifuc We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code: ```python import os import zipfile from huggingface_hub import hf_hub_download from waifuc.source import LocalSource # download raw archive file zip_file = hf_hub_download( repo_id='CyberHarem/wayu_fireemblem', repo_type='dataset', filename='dataset-raw.zip', ) # extract files to your directory dataset_dir = 'dataset_dir' os.makedirs(dataset_dir, exist_ok=True) with zipfile.ZipFile(zip_file, 'r') as zf: zf.extractall(dataset_dir) # load the dataset with waifuc source = LocalSource(dataset_dir) for item in source: print(item.image, item.meta['filename'], item.meta['tags']) ``` ## List of Clusters List of tag clustering results; some outfits may be mined here. 
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 7 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, fingerless_gloves, holding_sword, looking_at_viewer, smile, solo, detached_sleeves, armor, belt, simple_background, thighhighs, closed_mouth | | 1 | 13 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, smile, solo, fingerless_gloves, sword, simple_background, thighhighs, belt, detached_sleeves, bare_shoulders, white_background, open_mouth, zettai_ryouiki | | 2 | 5 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, detached_sleeves, fingerless_gloves, holding_sword, navel, official_alternate_costume, smile, solo, thighhighs, cape, looking_at_viewer, open_mouth, short_shorts, midriff, sheathed, white_shorts, bare_shoulders, belt, medium_breasts, simple_background, white_background, white_gloves | | 3 | 9 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, belt, 
halloween_costume, navel_cutout, solo, white_headband, witch_hat, detached_sleeves, fingerless_gloves, open_mouth, black_gloves, garter_straps, broom_riding, thighhighs | | 4 | 8 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, blue_sky, day, hair_flower, orange_bikini, solo, cloud, navel, cleavage, looking_at_viewer, grin, armpits, holding, large_breasts, open_mouth, thigh_strap, wristband | | 5 | 13 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | hair_flower, orange_bikini, 1girl, smile, solo, medium_breasts, simple_background, cleavage, looking_at_viewer, navel, white_background, open_mouth, wristband, official_alternate_costume, orange_flower, thigh_strap, holding | | 6 | 9 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, blush, hair_flower, hetero, orange_bikini, penis, sex, solo_focus, vaginal, nipples, open_mouth, 1boy, cum_in_pussy, spread_legs, thigh_strap, breasts_out, day, large_breasts, mosaic_censoring, navel, orange_flower, outdoors, clothes_lift, ejaculation, wristband, bangs, blue_sky, clothing_aside, cloud, stomach | | 7 | 7 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | 1girl, cum_in_pussy, gangbang, hetero, multiple_penises, nipples, solo_focus, thighhighs, vaginal, 3boys, blush, ejaculation, spread_legs, clothed_female_nude_male, cum_on_breasts, large_breasts, open_mouth, breast_grab, breasts_out, clothed_sex, detached_sleeves, facial, fingerless_gloves, grabbing, testicles, bar_censor, belt, gloved_handjob, mosaic_censoring, rape | | 8 | 6 | 
![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | 1boy, 1girl, blush, hetero, nipples, open_mouth, penis, solo_focus, thighhighs, vaginal, breasts_out, clothed_female_nude_male, clothed_sex, fingerless_gloves, medium_breasts, mosaic_censoring, spread_legs, belt, cum_in_pussy, elbow_gloves, lying, black_gloves, detached_sleeves, large_breasts, tears | | 9 | 6 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | 1girl, navel, nipples, smile, solo, pussy, blush, looking_at_viewer, medium_breasts, simple_background, bar_censor, completely_nude, large_breasts, white_background | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | fingerless_gloves | holding_sword | looking_at_viewer | smile | solo | detached_sleeves | armor | belt | simple_background | thighhighs | closed_mouth | sword | bare_shoulders | white_background | open_mouth | zettai_ryouiki | navel | official_alternate_costume | cape | short_shorts | midriff | sheathed | white_shorts | medium_breasts | white_gloves | halloween_costume | navel_cutout | white_headband | witch_hat | black_gloves | garter_straps | broom_riding | blue_sky | day | hair_flower | orange_bikini | cloud | cleavage | grin | armpits | holding | large_breasts | thigh_strap | wristband | orange_flower | blush | hetero | penis | sex | solo_focus | vaginal | nipples | 1boy | cum_in_pussy | spread_legs | breasts_out | mosaic_censoring | outdoors | clothes_lift | ejaculation | bangs | clothing_aside | stomach | gangbang | multiple_penises | 3boys | clothed_female_nude_male | cum_on_breasts | breast_grab | clothed_sex | facial | grabbing | testicles | bar_censor | gloved_handjob | rape | elbow_gloves | lying | tears | pussy | completely_nude | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:----------------|:--------------------|:--------|:-------|:-------------------|:--------|:-------|:--------------------|:-------------|:---------------|:--------|:-----------------|:-------------------|:-------------|:-----------------|:--------|:-----------------------------|:-------|:---------------|:----------|:-----------|:---------------|:-----------------|:---------------|:--------------------|:---------------|:-----------------|:------------|:---------------|:----------------|:---------------|:-----------|:------|:--------------|:----------------|:--------|:-----------|:-------|:----------|:----------|:----------------|:--------------|:------------|:----------------|:--------|:---------|:--------|:------|:-------------|:----------|:----------|:-------|:---------------|:--------------|:--------------|:-------------------|:-----------|:---------------|:--------------|:--------|:-----------------|:----------|:-----------|:-------------------|:--------|:---------------------------|:-----------------|:--------------|:--------------|:---------|:-----------|:------------|:-------------|:-----------------|:-------|:---------------|:--------|:--------|:--------|:------------------| | 0 | 7 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 13 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | | | X | X | X | | X | 
*(Flattened tag-membership table: rows for clusters 2–9, each listing an image count, five sample thumbnails (`samples/<n>/clu<n>-sample0..4.png`), and per-tag `X` membership marks; the tag column headers were lost in extraction.)*
liuyanchen1015/MULTI_VALUE_sst2_definite_abstract
--- dataset_info: features: - name: sentence dtype: string - name: label dtype: int64 - name: idx dtype: int64 - name: score dtype: int64 splits: - name: dev num_bytes: 29304 num_examples: 184 - name: test num_bytes: 53776 num_examples: 352 - name: train num_bytes: 893536 num_examples: 8188 download_size: 552568 dataset_size: 976616 --- # Dataset Card for "MULTI_VALUE_sst2_definite_abstract" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
google/cvss
--- license: cc-by-4.0 language: - en - ar - ca - cy - de - es - et - fa - fr - id - it - ja - lv - mn - nl - pt - ru - sl - sv - ta - tr - zh --- # CVSS: A Massively Multilingual Speech-to-Speech Translation Corpus *CVSS* is a massively multilingual-to-English speech-to-speech translation corpus, covering sentence-level parallel speech-to-speech translation pairs from 21 languages into English. CVSS is derived from the [Common Voice](https://commonvoice.mozilla.org/) speech corpus and the [CoVoST 2](https://github.com/facebookresearch/covost) speech-to-text translation corpus. The translation speech in CVSS is synthesized with two state-of-the-art TTS models trained on the [LibriTTS](http://www.openslr.org/60/) corpus. CVSS includes two versions of spoken translation for all the 21 x-en language pairs from CoVoST 2, with each version providing unique values: - *CVSS-C*: All the translation speeches are in a single canonical speaker's voice. Despite being synthetic, these speeches are of very high naturalness and cleanness, as well as having a consistent speaking style. These properties ease the modeling of the target speech and enable models to produce high quality translation speech suitable for user-facing applications. - *CVSS-T*: The translation speeches are in voices transferred from the corresponding source speeches. Each translation pair has similar voices on the two sides despite being in different languages, making this dataset suitable for building models that preserve speakers' voices when translating speech into different languages. Together with the source speeches originated from Common Voice, they make two multilingual speech-to-speech translation datasets each with about 1,900 hours of speech. In addition to translation speech, CVSS also provides normalized translation text matching the pronunciation in the translation speech (e.g. 
on numbers, currencies, acronyms, etc.), which can be used for both model training as well as standardizing evaluation. Please check out [our paper](https://arxiv.org/abs/2201.03713) for the detailed description of this corpus, as well as the baseline models we trained on both datasets. # Load the data The following example loads the translation speech (i.e. target speech) and the normalized translation text (i.e. target text) released in CVSS corpus. You'll need to load the source speech and optionally the source text from [Common Voice v4.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_4_0) separately, and join them by the file names. ```py from datasets import load_dataset # Load only ar-en and ja-en language pairs. Omitting the `languages` argument # would load all the language pairs. cvss_c = load_dataset('google/cvss', 'cvss_c', languages=['ar', 'ja']) # Print the structure of the dataset. print(cvss_c) ``` # License CVSS is released under the very permissive [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) license. ## Citation Please cite this paper when referencing the CVSS corpus: ``` @inproceedings{jia2022cvss, title={{CVSS} Corpus and Massively Multilingual Speech-to-Speech Translation}, author={Jia, Ye and Tadmor Ramanovich, Michelle and Wang, Quan and Zen, Heiga}, booktitle={Proceedings of Language Resources and Evaluation Conference (LREC)}, pages={6691--6703}, year={2022} } ```
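The join described above (pairing CVSS target speech with its Common Voice source clip by file name) can be sketched as a plain dictionary lookup. The column names are assumptions here — Common Voice clips are taken to carry a `path` field and CVSS targets a `file` field; inspect the loaded `dataset.features` to confirm them for your versions:

```python
def join_by_file(source_rows, target_rows):
    """Pair Common Voice source rows with CVSS target rows on shared clip file names.

    Field names are assumptions: "path" for Common Voice, "file" for CVSS;
    check `dataset.features` for the actual schema.
    """
    targets = {row["file"]: row for row in target_rows}
    return [
        {"source": src, "target": targets[src["path"]]}
        for src in source_rows
        if src["path"] in targets
    ]

# Toy rows standing in for the two loaded datasets:
source = [{"path": "clip_001.mp3", "sentence": "source text"}]
target = [{"file": "clip_001.mp3", "text": "translation text"}]
pairs = join_by_file(source, target)
```

In practice, `source` and `target` would be the iterables returned by `load_dataset` for Common Voice v4.0 and for this corpus, respectively.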
Blazej/banking_alignment_preference_ds
--- dataset_info: features: - name: guideline dtype: string - name: query dtype: string - name: chosen dtype: string - name: rejected dtype: string splits: - name: train num_bytes: 2286243 num_examples: 1200 download_size: 1112482 dataset_size: 2286243 configs: - config_name: default data_files: - split: train path: data/train-* ---
helojo/wav2vec2-large-mms-1b-zh-colab
--- license: openrail ---
hao7Chen/test_dataset
--- license: mit ---
om-ashish-soni/vivechan-spritual-text-dataset-v2
--- language: - en license: apache-2.0 size_categories: - 10K<n<100K task_categories: - text-retrieval - text2text-generation - text-to-speech dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 10591768 num_examples: 38029 download_size: 5326915 dataset_size: 10591768 configs: - config_name: default data_files: - split: train path: data/train-* --- # Vivechan - Spiritual Text Dataset ## Description The Vivechan - Spiritual Text Dataset is an open and public collection of textual data extracted from significant spiritual texts, curated to support discussions, inquiries, doubts, and Q&A sessions within the realm of spirituality. This dataset provides valuable content from the following revered sources: - Shrimad Bhagwat Mahapurana - Shripad Shri Vallabha Charitramrutam - Shiv Mahapurana Sankshipt - Valmiki Ramayan ## Dataset Information - **Features**: - **text**: Each example consists of a string containing textual excerpts from the mentioned sources. - **Splits**: - **Train**: 27,954 examples - **Download Size**: 3,565,541 bytes - **Dataset Size**: 7,659,570 bytes ## Task Categories The dataset is designed to facilitate the following tasks: - **Text Retrieval**: Retrieve relevant passages based on user queries or specified topics. - **Text-to-Text Generation**: Generate responses or elaborate on queries based on input text. - **Text-to-Speech**: Convert textual data into speech for auditory presentation. ## Usage This dataset, Vivechan - Spiritual Text Dataset, is openly available and can be utilized to train or fine-tune Language Models (LLMs), existing AI models, or develop new models for various applications within the realm of spirituality and spiritual texts. ## Language The dataset is available in English (en). ## Size Categories The dataset falls within the size category of 10K < n < 100K, making it suitable for training or fine-tuning LLMs and other AI models. 
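As a minimal illustration of the text-retrieval task listed above, the sketch below ranks passages by word overlap with a query. This is a toy baseline, not part of the dataset; in practice you would load the `text` column via `load_dataset` and use a proper retriever:

```python
def retrieve(passages, query, top_k=3):
    """Rank passages by the number of query words they share (a toy word-overlap baseline)."""
    query_words = set(query.lower().split())
    scored = sorted(
        passages,
        key=lambda p: len(query_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

# Toy passages standing in for the dataset's `text` column:
passages = [
    "the ocean of devotion described in the purana",
    "a verse praising devotion and compassion",
]
top = retrieve(passages, "devotion compassion", top_k=1)
```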
## License This dataset is released under the Apache License 2.0, enabling open usage, modification, and distribution. ## Citation If you use this dataset in your work, please cite it as: [Insert citation details here] ## Acknowledgements We express our gratitude to the original sources of the texts included in this dataset: - Shrimad Bhagwat Mahapurana - Shripad Shri Vallabha Charitramrutam - Shiv Mahapurana Sankshipt - Valmiki Ramayan
jjovalle99/raft-dataset-aws-wellarchitected
--- dataset_info: features: - name: id dtype: string - name: type dtype: string - name: question dtype: string - name: context struct: - name: sentences sequence: sequence: string - name: title sequence: sequence: string - name: oracle_context dtype: string - name: cot_answer dtype: string - name: instruction dtype: string splits: - name: train num_bytes: 46698557 num_examples: 1671 download_size: 17713755 dataset_size: 46698557 configs: - config_name: default data_files: - split: train path: data/train-* ---
rodrigobrazao/rbDataset
--- license: openrail ---
Matias12f/cats_dogs_trabajo
--- license: apache-2.0 ---
senti_ws
--- annotations_creators: - expert-generated - machine-generated language_creators: - found language: - de license: - cc-by-sa-3.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - original task_categories: - token-classification - text-classification task_ids: - text-scoring - sentiment-scoring - part-of-speech pretty_name: SentiWS dataset_info: - config_name: pos-tagging features: - name: word dtype: string - name: pos-tag dtype: class_label: names: '0': NN '1': VVINF '2': ADJX '3': ADV splits: - name: train num_bytes: 75530 num_examples: 3471 download_size: 97748 dataset_size: 75530 - config_name: sentiment-scoring features: - name: word dtype: string - name: sentiment-score dtype: float32 splits: - name: train num_bytes: 61646 num_examples: 3471 download_size: 97748 dataset_size: 61646 --- # Dataset Card for SentiWS ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://wortschatz.uni-leipzig.de/en/download - **Repository:** [Needs More 
Information] - **Paper:** http://www.lrec-conf.org/proceedings/lrec2010/pdf/490_Paper.pdf - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary SentimentWortschatz, or SentiWS for short, is a publicly available German-language resource for sentiment analysis, opinion mining etc. It lists positive and negative polarity-bearing words weighted within the interval of [-1; 1] plus their part of speech tag, and if applicable, their inflections. The current version of SentiWS contains around 1,650 positive and 1,800 negative words, which sum up to around 16,000 positive and 18,000 negative word forms incl. their inflections, respectively. It not only contains adjectives and adverbs explicitly expressing a sentiment, but also nouns and verbs implicitly containing one. ### Supported Tasks and Leaderboards Sentiment-Scoring, Pos-Tagging ### Languages German ## Dataset Structure ### Data Instances For pos-tagging: ``` { "word": "Abbau", "pos_tag": 0 } ``` For sentiment-scoring: ``` { "word": "Abbau", "sentiment-score": -0.058 } ``` ### Data Fields SentiWS is UTF8-encoded text. For pos-tagging: - word: one word as a string, - pos_tag: the part-of-speech tag of the word as an integer, For sentiment-scoring: - word: one word as a string, - sentiment-score: the sentiment score of the word as a float between -1 and 1, The POS tags are ["NN", "VVINF", "ADJX", "ADV"] -> ["noun", "verb", "adjective", "adverb"], and positive and negative polarity-bearing words are weighted within the interval of [-1, 1]. ### Data Splits train: 1,650 negative and 1,818 positive words ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators?
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License ### Citation Information @INPROCEEDINGS{remquahey2010, title = {SentiWS -- a Publicly Available German-language Resource for Sentiment Analysis}, booktitle = {Proceedings of the 7th International Language Resources and Evaluation (LREC'10)}, author = {Remus, R. and Quasthoff, U. and Heyer, G.}, year = {2010} } ### Contributions Thanks to [@harshalmittal4](https://github.com/harshalmittal4) for adding this dataset.
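A minimal sketch of lexicon-based scoring with the `sentiment-scoring` configuration: build a word-to-score mapping and sum scores over a tokenized sentence. The score for "Abbau" is the one shown in the data instances above; the entry for "Freude" is illustrative, not taken from the resource:

```python
def sentence_polarity(lexicon, tokens):
    """Sum SentiWS-style word scores over a tokenized sentence (0.0 for unknown words)."""
    return sum(lexicon.get(token, 0.0) for token in tokens)

# Toy lexicon; in practice build it from the "sentiment-scoring" configuration, e.g.
# {row["word"]: row["sentiment-score"] for row in sentiws_rows}
lexicon = {"Abbau": -0.058, "Freude": 0.65}  # "Freude" value is illustrative
score = sentence_polarity(lexicon, ["Abbau", "der", "Freude"])
```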
andersonbcdefg/minipile-simlm
--- dataset_info: features: - name: id dtype: int64 - name: contents dtype: string splits: - name: train num_bytes: 5914108510 num_examples: 1000000 download_size: 3150298950 dataset_size: 5914108510 configs: - config_name: default data_files: - split: train path: data/train-* ---
SEACrowd/bible_en_id
--- tags: - machine-translation language: - ind - eng --- # bible_en_id Bible En-Id is a machine translation dataset containing Indonesian-English parallel sentences collected from the bible. We also add a Bible dataset to the English Indonesian translation task. Specifically, we collect an Indonesian and an English language Bible and generate a verse-aligned parallel corpus for the English-Indonesian machine translation task. We split the dataset and use 75% as the training set, 10% as the validation set, and 15% as the test set. Each of the datasets is evaluated in both directions, i.e., English to Indonesian (En → Id) and Indonesian to English (Id → En) translations. ## Dataset Usage Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`. ## Citation ``` @inproceedings{cahyawijaya-etal-2021-indonlg, title = "{I}ndo{NLG}: Benchmark and Resources for Evaluating {I}ndonesian Natural Language Generation", author = "Cahyawijaya, Samuel and Winata, Genta Indra and Wilie, Bryan and Vincentio, Karissa and Li, Xiaohong and Kuncoro, Adhiguna and Ruder, Sebastian and Lim, Zhi Yuan and Bahar, Syafri and Khodra, Masayu and Purwarianti, Ayu and Fung, Pascale", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.699", doi = "10.18653/v1/2021.emnlp-main.699", pages = "8875--8898", abstract = "Natural language generation (NLG) benchmarks provide an important avenue to measure progress and develop better NLG systems. Unfortunately, the lack of publicly available NLG benchmarks for low-resource languages poses a challenging barrier for building NLG systems that work well for languages with limited amounts of data. 
Here we introduce IndoNLG, the first benchmark to measure natural language generation (NLG) progress in three low-resource{---}yet widely spoken{---}languages of Indonesia: Indonesian, Javanese, and Sundanese. Altogether, these languages are spoken by more than 100 million native speakers, and hence constitute an important use case of NLG systems today. Concretely, IndoNLG covers six tasks: summarization, question answering, chit-chat, and three different pairs of machine translation (MT) tasks. We collate a clean pretraining corpus of Indonesian, Sundanese, and Javanese datasets, Indo4B-Plus, which is used to pretrain our models: IndoBART and IndoGPT. We show that IndoBART and IndoGPT achieve competitive performance on all tasks{---}despite using only one-fifth the parameters of a larger multilingual model, mBART-large (Liu et al., 2020). This finding emphasizes the importance of pretraining on closely related, localized languages to achieve more efficient learning and faster inference at very low-resource languages like Javanese and Sundanese.", } ``` ## License Creative Commons Attribution Share-Alike 4.0 International ## Homepage [https://github.com/IndoNLP/indonlg](https://github.com/IndoNLP/indonlg) ### NusaCatalogue For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue)
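The 75%/10%/15% split described above can be sketched as simple proportional slicing of the verse-aligned pairs. Note the deterministic ordering here is an assumption — the published splits may have been shuffled before slicing:

```python
def split_75_10_15(pairs):
    """Split verse-aligned pairs into 75% train / 10% validation / 15% test slices."""
    n = len(pairs)
    n_train = int(0.75 * n)
    n_valid = int(0.10 * n)
    return pairs[:n_train], pairs[n_train:n_train + n_valid], pairs[n_train + n_valid:]

# Toy stand-in for the verse-aligned parallel corpus:
train, valid, test = split_75_10_15(list(range(100)))
```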
zhangshuoming/numeric_bench
--- dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 579085.0 num_examples: 1936 download_size: 142464 dataset_size: 579085.0 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "numeric_bench" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
loubnabnl/jupyter_python_max_line_length_1000
--- dataset_info: features: - name: hexsha dtype: string - name: ext dtype: string - name: lang dtype: string - name: max_stars_repo_path dtype: string - name: max_stars_repo_name dtype: string - name: max_stars_repo_licenses dtype: string - name: content dtype: string - name: avg_line_length dtype: float64 - name: max_line_length dtype: int64 splits: - name: train num_bytes: 1809605.8191577208 num_examples: 174 download_size: 6042172 dataset_size: 1809605.8191577208 --- # Dataset Card for "jupyter_python_max_line_length_1000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
pbaoo2705/covidqa_processed_eval
--- dataset_info: features: - name: question dtype: string - name: answer dtype: string - name: context_chunks sequence: string - name: document_id dtype: int64 - name: id dtype: int64 - name: context dtype: string - name: input_ids sequence: int32 - name: attention_mask sequence: int8 - name: start_positions dtype: int64 - name: end_positions dtype: int64 splits: - name: train num_bytes: 2643073 num_examples: 50 download_size: 730327 dataset_size: 2643073 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "covidqa_processed_eval" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ovior/twitter_dataset_1713220990
--- dataset_info: features: - name: id dtype: string - name: tweet_content dtype: string - name: user_name dtype: string - name: user_id dtype: string - name: created_at dtype: string - name: url dtype: string - name: favourite_count dtype: int64 - name: scraped_at dtype: string - name: image_urls dtype: string splits: - name: train num_bytes: 2417759 num_examples: 7518 download_size: 1353101 dataset_size: 2417759 configs: - config_name: default data_files: - split: train path: data/train-* ---
LangChainDatasets/question-answering-state-of-the-union
--- license: mit ---
luispessoa/marketing
--- license: osl-3.0 ---
leegeon/lee
--- license: other license_name: leee license_link: LICENSE ---
etrent17/irs-articles
--- license: mit ---
benxh/opensyllabus-tagged-libgen
--- task_categories: - text-classification language: - en tags: - Scrape - Open Syllabus - Libgen --- # Open Syllabus - tagged by category via libgen ### Dataset Summary This dataset is a scrape of explore.opensyllabus.com book titles, authors, etc., tagged by matching the same titles against Libgen's database. Its primary intended use is automated synthetic textbook generation. ## Considerations for Using the Data Do not use this dataset for anything illegal. This is meant as a reference point for further development of open-source AI.
PJMixers/NobodyExistsOnTheInternet_full120k-filtered-ShareGPT
--- size_categories: - 10K<n<100K language: - en tags: - not-for-all-audiences --- Filtered with this python script: https://gist.github.com/xzuyn/b6d727a515987c58064d44dbad02690b ``` Amount Kept: 69827 Amount Removed: 50484 String which caused removal: - however: 8239 - shivers down: 7029 - consensual: 6480 - meanwhile: 3463 - wanton: 2694 - her sex: 1880 - wild abandon: 1284 - It's important to: 1264 - controversial: 1127 - slick folds: 1099 - in a rhythm: 1021 - respectful: 956 - keep in mind: 888 - ministrations: 858 - ethical: 769 - diversity: 727 - dance of pleasure: 692 - prioritize safety: 690 - once upon: 685 - it is important to: 535 - gpt: 440 - with reckless abandon: 433 - fiery red hair: 416 - sent shockwaves: 386 - comply: 335 - empowerment: 317 - ethically: 288 - biases: 282 - regulations: 260 - puckered hole: 237 - Please note: 232 - inappropriate: 218 - morally: 199 - torn between: 188 - lay ahead: 184 - ensure the safety: 171 - harmful: 152 - exhausted and spent: 150 - derogatory: 149 - diversity and: 146 - rivulets of: 132 - illegal: 125 - ethics: 112 - threatens to consume: 110 - bias: 106 - I cannot: 101 - her wet heat: 100 - breathless and eager: 97 - complying: 95 - language model: 94 - potentially harmful: 94 - unacceptable: 88 - inclusivity: 87 - not provide: 87 - morals: 67 - stereotypes: 66 - discriminate: 63 - lgbt: 54 - not be suitable: 52 - As a machine: 51 - unethical: 51 - nestled deep within: 50 - racial: 44 - my programming: 43 - grins wickedly: 42 - discrimination: 41 - potentially dangerous: 40 - worth noting: 37 - offensive: 32 - safe spaces: 31 - As an AI: 31 - I'm an: 28 - legality: 28 - take your pleasure: 28 - cause harm: 27 - purely hypothetical: 27 - real-world consequences: 25 - half-lidded eyes: 24 - openai: 22 - sensitive topic: 21 - an ethereal beauty: 21 - the choice is yours: 20 - I'm sorry,: 20 - our values: 19 - It is important for: 19 - transgender: 17 - entertainment purposes: 17 - dusky nipples: 15 - I am an: 15 
- feminist: 15 - for what seemed like an eternity: 14 - knuckles turning white: 13 - follow ethical guidelines: 12 - glorify: 12 - like an electric shock: 11 - a bruising kiss: 11 - cheeks hollowing: 11 - certainly not: 10 - capitalism: 10 - prioritize ethical: 8 - life would never be the same again: 8 - racism: 8 - long lashes: 8 - the night is still young: 7 - dangerous activities: 6 - not acceptable: 6 - can't provide: 6 - ESG: 6 - admit it: 6 - my purpose: 6 - social responsibility: 5 - gender stereotype: 5 - communist: 5 - without waiting for a response: 5 - not appropriate: 5 - divisive: 5 - dangerous or harmful: 5 - warring with: 4 - important to remember that: 4 - the world narrows: 4 - promote safety: 4 - the ball is in your court: 4 - gender-based: 3 - chestnut eyes: 3 - the game is on: 3 - hate speech: 3 - harmful consequences: 3 - whispering words of passion: 2 - Ensuring the ethical: 2 - ethical principles: 2 - won't provide: 2 - extremist: 2 - It is not possible: 2 - not be appropriate: 2 - feminism: 2 - my guidelines: 2 - was soft and gentle: 2 - hateful: 2 - prioritize user well-being: 1 - inclusive workplace: 1 - a language model: 1 - hurtful: 1 - discriminatory: 1 - my main goal: 1 - an AI language: 1 - audible pop: 1 - bites your ear: 1 - kiss-bruised lips: 1 - AI assistant: 1 - jeopardize the safety: 1 - illegality: 1 - legal and ethical: 1 - sexism: 1 - gender inequality: 1 - propriety be damned: 1 - ...for now.: 1 - promote the well-being: 1 ```
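The linked gist is the authoritative filtering script; the sketch below only illustrates the idea — drop any conversation containing a banned substring (case-insensitive), crediting the first matching phrase with the removal, which is how a tally like the one above can be produced:

```python
def filter_by_substrings(texts, banned_phrases):
    """Keep texts containing none of the banned phrases; tally which phrase removed each text."""
    kept, removal_counts = [], {}
    for text in texts:
        lowered = text.lower()
        hit = next((p for p in banned_phrases if p.lower() in lowered), None)
        if hit is None:
            kept.append(text)
        else:
            removal_counts[hit] = removal_counts.get(hit, 0) + 1
    return kept, removal_counts

kept, counts = filter_by_substrings(
    ["A plain reply.", "However, I cannot comply."],
    ["however", "I cannot"],
)
```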
AdapterOcean/med_alpaca_standardized_cluster_82
--- dataset_info: features: - name: text dtype: string - name: conversation_id dtype: int64 - name: embedding sequence: float64 - name: cluster dtype: int64 splits: - name: train num_bytes: 54588111 num_examples: 5596 download_size: 15770422 dataset_size: 54588111 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "med_alpaca_standardized_cluster_82" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
davanstrien/arxiv_maybe_about_new_datasets
--- dataset_info: features: - name: id dtype: string - name: submitter dtype: string - name: authors dtype: string - name: title dtype: string - name: comments dtype: string - name: journal-ref dtype: string - name: doi dtype: string - name: report-no dtype: string - name: categories dtype: string - name: license dtype: string - name: abstract dtype: string - name: versions list: - name: version dtype: string - name: created dtype: string - name: update_date dtype: timestamp[s] - name: authors_parsed sequence: sequence: string - name: predictions dtype: string - name: probabilities dtype: float64 splits: - name: train num_bytes: 746174024 num_examples: 450403 download_size: 409173494 dataset_size: 746174024 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "arxiv_maybe_about_new_datasets" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ademax/ocr_handwritten_vi
--- dataset_info: features: - name: path dtype: string - name: text dtype: string - name: image dtype: image - name: meta struct: - name: path dtype: string - name: subset dtype: string splits: - name: train num_bytes: 103019407.98201361 num_examples: 1823 download_size: 373574299 dataset_size: 103019407.98201361 --- # Dataset Card for "ocr_handwritten_vi" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ashishxx/my_dataset_test
--- dataset_info: features: - name: audio dtype: audio splits: - name: train num_bytes: 4635972.0 num_examples: 5 download_size: 4634209 dataset_size: 4635972.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
fairnlp/weat
--- language: - en configs: - config_name: words data_files: - split: words path: words.parquet - config_name: associations data_files: - split: associations_weat path: associations_weat.parquet - config_name: associations_wefat data_files: - split: associations_wefat path: associations_wefat.parquet --- # Usage When downloading, specify which files you want to download and set the split to `train` (required by `datasets`). ```python from datasets import load_dataset words = load_dataset("fairnlp/weat", data_files=["words.parquet"], split="train") associations = load_dataset("fairnlp/weat", data_files=["associations_weat.parquet"], split="train") ``` # Dataset Card for Word Embedding Association Test (WEAT) This dataset contains the source words of the original Word Embedding Association Test (WEAT) as described [by Caliskan et. al. (2016)](https://arxiv.org/abs/1608.07187). ## Dataset Details The dataset contains word lists and attribute lists used to compute several WEAT scores for different embedding associations. For details on the methodology, please refer to the original paper. This dataset is contributed to Hugging Face as part of the WEAT implementation in the [FairNLP `fairscore` library](https://github.com/FairNLP/fairscore/). ### Dataset Sources - **Paper [optional]:** lcs.bath.ac.uk/~jjb/ftp/CaliskanSemantics-Arxiv.pdf **BibTeX:** ```bibtex @article{DBLP:journals/corr/IslamBN16, author = {Aylin Caliskan Islam and Joanna J. Bryson and Arvind Narayanan}, title = {Semantics derived automatically from language corpora necessarily contain human biases}, journal = {CoRR}, volume = {abs/1608.07187}, year = {2016}, url = {http://arxiv.org/abs/1608.07187}, eprinttype = {arXiv}, eprint = {1608.07187}, timestamp = {Sat, 23 Jan 2021 01:20:12 +0100}, biburl = {https://dblp.org/rec/journals/corr/IslamBN16.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
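For orientation, the per-word association statistic at the heart of WEAT, s(w, A, B), is the mean cosine similarity of a word vector to attribute set A minus its mean cosine similarity to attribute set B. A plain-Python sketch (the full test statistic, effect size, and permutation p-value from the paper are omitted):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def weat_association(w, A, B):
    """s(w, A, B): mean cosine of w to attribute set A minus mean cosine to set B."""
    return (sum(cosine(w, a) for a in A) / len(A)
            - sum(cosine(w, b) for b in B) / len(B))

# Toy 2-d vectors: a word aligned with attribute set A scores positively.
s = weat_association([1.0, 0.0], A=[[1.0, 0.0]], B=[[0.0, 1.0]])
```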
gsh3729/coco_cropped
--- dataset_info: features: - name: image_id dtype: int64 - name: image struct: - name: bytes dtype: binary - name: width dtype: int64 - name: height dtype: int64 splits: - name: train num_bytes: 113905 num_examples: 8 download_size: 119586 dataset_size: 113905 configs: - config_name: default data_files: - split: train path: data/train-* ---
316usman/thematic5b_rr
--- dataset_info: features: - name: text dtype: string - name: document_url dtype: string - name: source_url dtype: string - name: num_tokens dtype: int64 splits: - name: train num_bytes: 199955207.0 num_examples: 297173 download_size: 70332375 dataset_size: 199955207.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
joey234/mmlu-sociology-neg-prepend-verbal
--- configs: - config_name: default data_files: - split: dev path: data/dev-* - split: test path: data/test-* dataset_info: features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: class_label: names: '0': A '1': B '2': C '3': D - name: negate_openai_prompt struct: - name: content dtype: string - name: role dtype: string - name: neg_question dtype: string - name: fewshot_context dtype: string - name: ori_prompt dtype: string - name: neg_prompt dtype: string - name: fewshot_context_neg dtype: string - name: fewshot_context_ori dtype: string splits: - name: dev num_bytes: 7988 num_examples: 5 - name: test num_bytes: 2058578 num_examples: 201 download_size: 238425 dataset_size: 2066566 --- # Dataset Card for "mmlu-sociology-neg-prepend-verbal" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mask-distilled-libri-one-sec-cv12/chunk_1
--- dataset_info: features: - name: audio dtype: audio: sampling_rate: 16000 - name: logits sequence: float32 splits: - name: train num_bytes: 316615918.5402795 num_examples: 9876 download_size: 258755745 dataset_size: 316615918.5402795 configs: - config_name: default data_files: - split: train path: data/train-* ---
pesc101/spyder-ide-lbl
--- dataset_info: features: - name: meta_data struct: - name: contains_class dtype: bool - name: contains_function dtype: bool - name: end_line dtype: int64 - name: file_imports sequence: string - name: file_name dtype: string - name: module dtype: string - name: start_line dtype: int64 - name: code dtype: string - name: question dtype: string - name: answer dtype: string - name: prompt dtype: string splits: - name: train num_bytes: 28961152 num_examples: 8095 - name: test num_bytes: 66006 num_examples: 23 download_size: 8144155 dataset_size: 29027158 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
AdapterOcean/python-code-instructions-18k-alpaca-standardized_cluster_6
--- dataset_info: features: - name: text dtype: string - name: conversation_id dtype: int64 - name: embedding sequence: float64 - name: cluster dtype: int64 splits: - name: train num_bytes: 8687931 num_examples: 1007 download_size: 2237412 dataset_size: 8687931 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "python-code-instructions-18k-alpaca-standardized_cluster_6" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Seongill/Trivia_missing_5_small_masked
--- dataset_info: features: - name: question dtype: string - name: answers sequence: string - name: ctxs list: - name: hasanswer dtype: bool - name: id dtype: string - name: score dtype: float64 - name: text dtype: string - name: title dtype: string - name: has_answer dtype: bool - name: masked_query dtype: string - name: query_embedding sequence: float32 splits: - name: train num_bytes: 25528947 num_examples: 3771 download_size: 22550201 dataset_size: 25528947 configs: - config_name: default data_files: - split: train path: data/train-* ---
osbm/zenodo
--- pretty_name: Download Zenodo Dataset files --- # Download Zenodo dataset files using Hugging Face datasets You can download a specific file from the Zenodo dataset using the following code: Zenodo ID: 5172018 File name: FDB-17-fragmentset.smi.gz ```python from datasets import load_dataset load_dataset("osbm/zenodo", "5172018_FDB-17-fragmentset.smi.gz") ``` This command will also copy the file into your current directory so that you can use it directly. Here is an example notebook: https://gist.github.com/osbm/35a499f5756df22de30be20463aa6331 # Contribution [The huggingface repository](https://huggingface.co/datasets/osbm/zenodo) is actually a mirror of the github repository [osbm/zenodo](https://github.com/osbm/huggingface-zenodo-datasets). If you want to open an issue or PR, please do it on the github repository. I chose to do it this way because I wanted to use GitHub Actions. Currently, a GitHub Action mirrors the repository to Hugging Face. 😅
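As a hedged sketch (not part of the card itself): once the `.smi.gz` file has been copied into the current directory, it can be read line by line with Python's standard `gzip` module, without decompressing it on disk. The tiny stand-in file written below is hypothetical so the snippet runs on its own; with the real download you would point `path` at `FDB-17-fragmentset.smi.gz` and skip the writing step.

```python
import gzip

# Hypothetical stand-in for the downloaded file; with the real dataset,
# set path = "FDB-17-fragmentset.smi.gz" after running load_dataset above.
path = "sample-fragmentset.smi.gz"
with gzip.open(path, "wt") as f:
    f.write("CCO ethanol\nc1ccccc1 benzene\n")

# Read the gzipped SMILES file line by line; the first whitespace-separated
# column of each non-empty line is taken as the SMILES string.
smiles = []
with gzip.open(path, "rt") as f:
    for line in f:
        if line.strip():
            smiles.append(line.split()[0])

print(smiles)  # → ['CCO', 'c1ccccc1']
```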
heliosprime/twitter_dataset_1713209602
--- dataset_info: features: - name: id dtype: string - name: tweet_content dtype: string - name: user_name dtype: string - name: user_id dtype: string - name: created_at dtype: string - name: url dtype: string - name: favourite_count dtype: int64 - name: scraped_at dtype: string - name: image_urls dtype: string splits: - name: train num_bytes: 22364 num_examples: 60 download_size: 20006 dataset_size: 22364 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "twitter_dataset_1713209602" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AppleHarem/airi_bluearchive
--- license: mit task_categories: - text-to-image tags: - art - not-for-all-audiences size_categories: - n<1K --- # Dataset of airi (Blue Archive) This is the dataset of airi (Blue Archive), containing 79 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)). This WebUI contains the crawlers and other tools: ([LittleAppleWebUI](https://github.com/LittleApple-fp16/LittleAppleWebUI)) | Name | Images | Download | Description | |:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------| | raw | 79 | [Download](dataset-raw.zip) | Raw data with meta information. | | raw-stage3 | 212 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. | | raw-stage3-eyes | 243 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. | | 384x512 | 79 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. | | 512x704 | 79 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. | | 640x880 | 79 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. | | stage3-640 | 212 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. | | stage3-800 | 212 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. | | stage3-p512-640 | 176 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. | | stage3-eyes-640 | 243 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. 
| | stage3-eyes-800 | 243 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
santhosh/mlwiki-sentences
--- license: cc-by-sa-4.0 ---
tinyBenchmarks/tinyAI2_arc
--- language: - en dataset_info: config_name: ARC-Challenge features: - name: id dtype: string - name: question dtype: string - name: choices sequence: - name: text dtype: string - name: label dtype: string - name: answerKey dtype: string - name: input_formatted dtype: string splits: - name: train num_bytes: 4776965 num_examples: 1119 - name: test num_bytes: 496912 num_examples: 100 - name: validation num_bytes: 1281856 num_examples: 299 download_size: 1154855 dataset_size: 6555733 configs: - config_name: ARC-Challenge data_files: - split: train path: ARC-Challenge/train-* - split: test path: ARC-Challenge/test-* - split: validation path: ARC-Challenge/validation-* task_categories: - question-answering pretty_name: tinyArc size_categories: - n<1K multilinguality: - monolingual source_datasets: - allenai/ai2_arc task_ids: - open-domain-qa - multiple-choice-qa language_bcp47: - en-US --- # tinyAI2_arc Welcome to tinyAI2_arc! This dataset serves as a concise version of the [AI2_arc challenge dataset](https://huggingface.co/datasets/allenai/ai2_arc), offering a subset of 100 data points selected from the original compilation. tinyAI2_arc is designed to enable users to efficiently estimate the performance of a large language model (LLM) with reduced dataset size, saving computational resources while maintaining the essence of the ARC challenge evaluation. ## Features - **Compact Dataset:** With only 100 data points, tinyAI2_arc provides a swift and efficient way to evaluate your LLM's performance against a benchmark set, maintaining the essence of the original ARC challenge dataset. - **Compatibility:** tinyAI2_arc is compatible with evaluation using the [lm evaluation harness](https://github.com/EleutherAI/lm-evaluation-harness/), but can also be integrated into your custom pipeline. See below for more details. 
## Model Evaluation Users looking to evaluate a new model with tinyAI2_arc can use the [lm evaluation harness (v0.4.1 or later)](https://github.com/EleutherAI/lm-evaluation-harness/). Simply replace `dataset_path: allenai/ai2_arc` with `dataset_path: tinyBenchmarks/tinyAI2_arc` in the file `lm-evaluation-harness/lm_eval/tasks/arc/arc_easy.yaml` and run your evaluation harness as usual, using the `--log_samples` argument: ```shell lm_eval --model hf --model_args pretrained="<your-model>" --tasks=arc_challenge --batch_size=1 --num_fewshot=25 --output_path=<output_path> --log_samples ``` Alternatively, tinyAI2_arc can be integrated into any other pipeline by downloading the data via ```python from datasets import load_dataset tiny_data = load_dataset('tinyBenchmarks/tinyAI2_arc', 'ARC-Challenge')['test'] ``` Now, `tiny_data` contains the 100 subsampled data points with the same features as the original dataset, as well as an additional field containing the preformatted data points. The preformatted data points follow the formatting used in the [open llm leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) including the respective in-context examples. When using the lm evaluation harness, you can then estimate your LLM's performance using the following code. First, ensure you have the tinyBenchmarks package installed: ```shell pip install git+https://github.com/felipemaiapolo/tinyBenchmarks ``` Then, use the code snippet below for the evaluation: ```python import numpy as np import tinyBenchmarks as tb ### Score vector y = ...  # your original score vector ### Parameters benchmark = 'arc' ### Evaluation tb.evaluate(y, benchmark) ``` This process will help you estimate the performance of your LLM against the tinyAI2_arc dataset, providing a streamlined approach to benchmarking. Please be aware that evaluating on multiple GPUs can change the order of outputs in the lm evaluation harness. 
Ordering your score vector following the original order in tinyAI2_arc will be necessary to use the tinyBenchmarks library. For more detailed instructions on evaluating new models and computing scores, please refer to the comprehensive guides available at [lm evaluation harness](https://github.com/EleutherAI/lm-evaluation-harness/) and [tinyBenchmarks GitHub](https://github.com/felipemaiapolo/tinyBenchmarks). Happy benchmarking! ## More tinyBenchmarks **Open LLM leaderboard**: [tiny MMLU](https://huggingface.co/datasets/tinyBenchmarks/tinyMMLU), [tiny Winogrande](https://huggingface.co/datasets/tinyBenchmarks/tinyWinogrande), [tiny Hellaswag](https://huggingface.co/datasets/tinyBenchmarks/tinyHellaswag), [tiny TruthfulQA](https://huggingface.co/datasets/tinyBenchmarks/tinyTruthfulQA), [tiny GSM8k](https://huggingface.co/datasets/tinyBenchmarks/tinyGSM8k) **AlpacaEval**: [tiny AlpacaEval](https://huggingface.co/datasets/tinyBenchmarks/tinyAlpacaEval) **HELM-lite**: _work-in-progress_ ## Citation @article{polo2024tinybenchmarks, title={tinyBenchmarks: evaluating LLMs with fewer examples}, author={Felipe Maia Polo and Lucas Weber and Leshem Choshen and Yuekai Sun and Gongjun Xu and Mikhail Yurochkin}, year={2024}, eprint={2402.14992}, archivePrefix={arXiv}, primaryClass={cs.CL} } @article{allenai:arc, author = {Peter Clark and Isaac Cowhey and Oren Etzioni and Tushar Khot and Ashish Sabharwal and Carissa Schoenick and Oyvind Tafjord}, title = {Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge}, journal = {arXiv:1803.05457v1}, year = {2018}, }
open-llm-leaderboard/details_Test157t__Prima-LelantaclesV6.25-7b
--- pretty_name: Evaluation run of Test157t/Prima-LelantaclesV6.25-7b dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [Test157t/Prima-LelantaclesV6.25-7b](https://huggingface.co/Test157t/Prima-LelantaclesV6.25-7b)\ \ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 63 configurations, each one corresponding to one of the\ \ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" stores all the aggregated results of the\ \ run (and is used to compute and display the aggregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Test157t__Prima-LelantaclesV6.25-7b\"\ ,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\ These are the [latest results from run 2024-03-12T06:40:34.340598](https://huggingface.co/datasets/open-llm-leaderboard/details_Test157t__Prima-LelantaclesV6.25-7b/blob/main/results_2024-03-12T06-40-34.340598.json) (note\ \ that there might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6479869212572227,\n\ \ \"acc_stderr\": 0.032212425535783476,\n \"acc_norm\": 0.6488546794400761,\n\ \ \"acc_norm_stderr\": 0.03287124442527795,\n \"mc1\": 0.5177478580171359,\n\ \ \"mc1_stderr\": 0.017492470843075356,\n \"mc2\": 0.6744414664863697,\n\ \ \"mc2_stderr\": 0.015117300796412379\n },\n \"harness|arc:challenge|25\"\ : {\n \"acc\": 0.6723549488054608,\n \"acc_stderr\": 0.013715847940719339,\n\ \ \"acc_norm\": 0.6911262798634812,\n \"acc_norm_stderr\": 0.013501770929344003\n\ \ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7066321449910377,\n\ \ \"acc_stderr\": 0.004543750480065779,\n \"acc_norm\": 0.8729336785500896,\n\ \ \"acc_norm_stderr\": 0.003323665964412193\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\ : {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \ \ \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n \ \ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6222222222222222,\n\ \ \"acc_stderr\": 0.04188307537595853,\n \"acc_norm\": 0.6222222222222222,\n\ \ \"acc_norm_stderr\": 0.04188307537595853\n },\n \"harness|hendrycksTest-astronomy|5\"\ : {\n \"acc\": 0.6842105263157895,\n \"acc_stderr\": 0.0378272898086547,\n\ \ \"acc_norm\": 0.6842105263157895,\n \"acc_norm_stderr\": 0.0378272898086547\n\ \ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.6,\n\ \ \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.6,\n \ \ \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\ : {\n \"acc\": 0.7169811320754716,\n \"acc_stderr\": 0.027724236492700918,\n\ \ \"acc_norm\": 0.7169811320754716,\n \"acc_norm_stderr\": 0.027724236492700918\n\ \ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7569444444444444,\n\ \ \"acc_stderr\": 0.0358687928008034,\n \"acc_norm\": 0.7569444444444444,\n\ \ \"acc_norm_stderr\": 0.0358687928008034\n },\n 
\"harness|hendrycksTest-college_chemistry|5\"\ : {\n \"acc\": 0.45,\n \"acc_stderr\": 0.05,\n \"acc_norm\"\ : 0.45,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_computer_science|5\"\ : {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620333,\n \ \ \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620333\n \ \ },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.3,\n\ \ \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \ \ \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_medicine|5\"\ : {\n \"acc\": 0.6358381502890174,\n \"acc_stderr\": 0.03669072477416907,\n\ \ \"acc_norm\": 0.6358381502890174,\n \"acc_norm_stderr\": 0.03669072477416907\n\ \ },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.37254901960784315,\n\ \ \"acc_stderr\": 0.048108401480826346,\n \"acc_norm\": 0.37254901960784315,\n\ \ \"acc_norm_stderr\": 0.048108401480826346\n },\n \"harness|hendrycksTest-computer_security|5\"\ : {\n \"acc\": 0.79,\n \"acc_stderr\": 0.04093601807403326,\n \ \ \"acc_norm\": 0.79,\n \"acc_norm_stderr\": 0.04093601807403326\n \ \ },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5787234042553191,\n\ \ \"acc_stderr\": 0.03227834510146267,\n \"acc_norm\": 0.5787234042553191,\n\ \ \"acc_norm_stderr\": 0.03227834510146267\n },\n \"harness|hendrycksTest-econometrics|5\"\ : {\n \"acc\": 0.5175438596491229,\n \"acc_stderr\": 0.04700708033551038,\n\ \ \"acc_norm\": 0.5175438596491229,\n \"acc_norm_stderr\": 0.04700708033551038\n\ \ },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\"\ : 0.593103448275862,\n \"acc_stderr\": 0.04093793981266236,\n \"acc_norm\"\ : 0.593103448275862,\n \"acc_norm_stderr\": 0.04093793981266236\n },\n\ \ \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.41005291005291006,\n\ \ \"acc_stderr\": 0.025331202438944433,\n \"acc_norm\": 0.41005291005291006,\n\ \ \"acc_norm_stderr\": 0.025331202438944433\n },\n 
\"harness|hendrycksTest-formal_logic|5\"\ : {\n \"acc\": 0.4603174603174603,\n \"acc_stderr\": 0.04458029125470973,\n\ \ \"acc_norm\": 0.4603174603174603,\n \"acc_norm_stderr\": 0.04458029125470973\n\ \ },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.37,\n\ \ \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.37,\n \ \ \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-high_school_biology|5\"\ : {\n \"acc\": 0.7741935483870968,\n \"acc_stderr\": 0.023785577884181012,\n\ \ \"acc_norm\": 0.7741935483870968,\n \"acc_norm_stderr\": 0.023785577884181012\n\ \ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\ : 0.541871921182266,\n \"acc_stderr\": 0.03505630140785741,\n \"acc_norm\"\ : 0.541871921182266,\n \"acc_norm_stderr\": 0.03505630140785741\n },\n\ \ \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\"\ : 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.73,\n\ \ \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\ : {\n \"acc\": 0.7878787878787878,\n \"acc_stderr\": 0.031922715695483,\n\ \ \"acc_norm\": 0.7878787878787878,\n \"acc_norm_stderr\": 0.031922715695483\n\ \ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\ : 0.797979797979798,\n \"acc_stderr\": 0.02860620428922987,\n \"acc_norm\"\ : 0.797979797979798,\n \"acc_norm_stderr\": 0.02860620428922987\n },\n\ \ \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \ \ \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.02199531196364424,\n\ \ \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.02199531196364424\n\ \ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \ \ \"acc\": 0.6717948717948717,\n \"acc_stderr\": 0.023807633198657262,\n\ \ \"acc_norm\": 0.6717948717948717,\n \"acc_norm_stderr\": 0.023807633198657262\n\ \ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\ acc\": 0.34814814814814815,\n 
\"acc_stderr\": 0.029045600290616255,\n \ \ \"acc_norm\": 0.34814814814814815,\n \"acc_norm_stderr\": 0.029045600290616255\n\ \ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \ \ \"acc\": 0.7016806722689075,\n \"acc_stderr\": 0.029719142876342856,\n\ \ \"acc_norm\": 0.7016806722689075,\n \"acc_norm_stderr\": 0.029719142876342856\n\ \ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\ : 0.37748344370860926,\n \"acc_stderr\": 0.03958027231121569,\n \"\ acc_norm\": 0.37748344370860926,\n \"acc_norm_stderr\": 0.03958027231121569\n\ \ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\ : 0.8385321100917431,\n \"acc_stderr\": 0.015776239256163265,\n \"\ acc_norm\": 0.8385321100917431,\n \"acc_norm_stderr\": 0.015776239256163265\n\ \ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\ : 0.5555555555555556,\n \"acc_stderr\": 0.03388857118502325,\n \"\ acc_norm\": 0.5555555555555556,\n \"acc_norm_stderr\": 0.03388857118502325\n\ \ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\ : 0.8186274509803921,\n \"acc_stderr\": 0.027044621719474082,\n \"\ acc_norm\": 0.8186274509803921,\n \"acc_norm_stderr\": 0.027044621719474082\n\ \ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\ acc\": 0.7974683544303798,\n \"acc_stderr\": 0.026160568246601457,\n \ \ \"acc_norm\": 0.7974683544303798,\n \"acc_norm_stderr\": 0.026160568246601457\n\ \ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6816143497757847,\n\ \ \"acc_stderr\": 0.03126580522513713,\n \"acc_norm\": 0.6816143497757847,\n\ \ \"acc_norm_stderr\": 0.03126580522513713\n },\n \"harness|hendrycksTest-human_sexuality|5\"\ : {\n \"acc\": 0.7709923664122137,\n \"acc_stderr\": 0.036853466317118506,\n\ \ \"acc_norm\": 0.7709923664122137,\n \"acc_norm_stderr\": 0.036853466317118506\n\ \ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\ \ 0.7520661157024794,\n \"acc_stderr\": 0.03941897526516303,\n \"\ 
acc_norm\": 0.7520661157024794,\n \"acc_norm_stderr\": 0.03941897526516303\n\ \ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7777777777777778,\n\ \ \"acc_stderr\": 0.040191074725573483,\n \"acc_norm\": 0.7777777777777778,\n\ \ \"acc_norm_stderr\": 0.040191074725573483\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\ : {\n \"acc\": 0.7668711656441718,\n \"acc_stderr\": 0.0332201579577674,\n\ \ \"acc_norm\": 0.7668711656441718,\n \"acc_norm_stderr\": 0.0332201579577674\n\ \ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4732142857142857,\n\ \ \"acc_stderr\": 0.047389751192741546,\n \"acc_norm\": 0.4732142857142857,\n\ \ \"acc_norm_stderr\": 0.047389751192741546\n },\n \"harness|hendrycksTest-management|5\"\ : {\n \"acc\": 0.7475728155339806,\n \"acc_stderr\": 0.04301250399690878,\n\ \ \"acc_norm\": 0.7475728155339806,\n \"acc_norm_stderr\": 0.04301250399690878\n\ \ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8632478632478633,\n\ \ \"acc_stderr\": 0.0225090339370778,\n \"acc_norm\": 0.8632478632478633,\n\ \ \"acc_norm_stderr\": 0.0225090339370778\n },\n \"harness|hendrycksTest-medical_genetics|5\"\ : {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \ \ \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n \ \ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8173690932311622,\n\ \ \"acc_stderr\": 0.013816335389973138,\n \"acc_norm\": 0.8173690932311622,\n\ \ \"acc_norm_stderr\": 0.013816335389973138\n },\n \"harness|hendrycksTest-moral_disputes|5\"\ : {\n \"acc\": 0.7283236994219653,\n \"acc_stderr\": 0.023948512905468365,\n\ \ \"acc_norm\": 0.7283236994219653,\n \"acc_norm_stderr\": 0.023948512905468365\n\ \ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.41787709497206704,\n\ \ \"acc_stderr\": 0.016495400635820084,\n \"acc_norm\": 0.41787709497206704,\n\ \ \"acc_norm_stderr\": 0.016495400635820084\n },\n \"harness|hendrycksTest-nutrition|5\"\ : {\n \"acc\": 
0.7058823529411765,\n \"acc_stderr\": 0.026090162504279053,\n\ \ \"acc_norm\": 0.7058823529411765,\n \"acc_norm_stderr\": 0.026090162504279053\n\ \ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7041800643086816,\n\ \ \"acc_stderr\": 0.025922371788818774,\n \"acc_norm\": 0.7041800643086816,\n\ \ \"acc_norm_stderr\": 0.025922371788818774\n },\n \"harness|hendrycksTest-prehistory|5\"\ : {\n \"acc\": 0.7376543209876543,\n \"acc_stderr\": 0.024477222856135107,\n\ \ \"acc_norm\": 0.7376543209876543,\n \"acc_norm_stderr\": 0.024477222856135107\n\ \ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\ acc\": 0.5141843971631206,\n \"acc_stderr\": 0.02981549448368206,\n \ \ \"acc_norm\": 0.5141843971631206,\n \"acc_norm_stderr\": 0.02981549448368206\n\ \ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4621903520208605,\n\ \ \"acc_stderr\": 0.012733671880342506,\n \"acc_norm\": 0.4621903520208605,\n\ \ \"acc_norm_stderr\": 0.012733671880342506\n },\n \"harness|hendrycksTest-professional_medicine|5\"\ : {\n \"acc\": 0.6654411764705882,\n \"acc_stderr\": 0.0286619962023353,\n\ \ \"acc_norm\": 0.6654411764705882,\n \"acc_norm_stderr\": 0.0286619962023353\n\ \ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\ acc\": 0.6503267973856209,\n \"acc_stderr\": 0.01929196189506638,\n \ \ \"acc_norm\": 0.6503267973856209,\n \"acc_norm_stderr\": 0.01929196189506638\n\ \ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7,\n\ \ \"acc_stderr\": 0.04389311454644287,\n \"acc_norm\": 0.7,\n \ \ \"acc_norm_stderr\": 0.04389311454644287\n },\n \"harness|hendrycksTest-security_studies|5\"\ : {\n \"acc\": 0.726530612244898,\n \"acc_stderr\": 0.028535560337128448,\n\ \ \"acc_norm\": 0.726530612244898,\n \"acc_norm_stderr\": 0.028535560337128448\n\ \ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8308457711442786,\n\ \ \"acc_stderr\": 0.026508590656233264,\n \"acc_norm\": 0.8308457711442786,\n\ \ 
\"acc_norm_stderr\": 0.026508590656233264\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\ : {\n \"acc\": 0.85,\n \"acc_stderr\": 0.03588702812826371,\n \ \ \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.03588702812826371\n \ \ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5060240963855421,\n\ \ \"acc_stderr\": 0.03892212195333045,\n \"acc_norm\": 0.5060240963855421,\n\ \ \"acc_norm_stderr\": 0.03892212195333045\n },\n \"harness|hendrycksTest-world_religions|5\"\ : {\n \"acc\": 0.8421052631578947,\n \"acc_stderr\": 0.027966785859160893,\n\ \ \"acc_norm\": 0.8421052631578947,\n \"acc_norm_stderr\": 0.027966785859160893\n\ \ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5177478580171359,\n\ \ \"mc1_stderr\": 0.017492470843075356,\n \"mc2\": 0.6744414664863697,\n\ \ \"mc2_stderr\": 0.015117300796412379\n },\n \"harness|winogrande|5\"\ : {\n \"acc\": 0.8263614838200474,\n \"acc_stderr\": 0.010646116480331\n\ \ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6034874905231236,\n \ \ \"acc_stderr\": 0.013474258584033345\n }\n}\n```" repo_url: https://huggingface.co/Test157t/Prima-LelantaclesV6.25-7b leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_arc_challenge_25 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|arc:challenge|25_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|arc:challenge|25_2024-03-12T06-40-34.340598.parquet' - config_name: harness_gsm8k_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|gsm8k|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hellaswag_10 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hellaswag|10_2024-03-12T06-40-34.340598.parquet' - split: latest path: - 
'**/details_harness|hellaswag|10_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-12T06-40-34.340598.parquet' - 
'**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-international_law|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-management|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-marketing|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-12T06-40-34.340598.parquet' - 
'**/details_harness|hendrycksTest-miscellaneous|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-sociology|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-virology|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2024-03-12T06-40-34.340598.parquet' - 
'**/details_harness|hendrycksTest-college_chemistry|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-12T06-40-34.340598.parquet' - 
'**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-international_law|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-management|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-marketing|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2024-03-12T06-40-34.340598.parquet' - 
'**/details_harness|hendrycksTest-professional_accounting|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-sociology|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-virology|5_2024-03-12T06-40-34.340598.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_abstract_algebra_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_anatomy_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-anatomy|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-anatomy|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_astronomy_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-astronomy|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-astronomy|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_business_ethics_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - 
'**/details_harness|hendrycksTest-business_ethics|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-business_ethics|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_clinical_knowledge_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_college_biology_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-college_biology|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_biology|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_college_chemistry_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_college_computer_science_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_college_mathematics_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_college_medicine_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - 
'**/details_harness|hendrycksTest-college_medicine|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_medicine|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_college_physics_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-college_physics|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_physics|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_computer_security_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-computer_security|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-computer_security|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_conceptual_physics_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_econometrics_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-econometrics|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-econometrics|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_electrical_engineering_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_elementary_mathematics_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-12T06-40-34.340598.parquet' - 
split: latest path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_formal_logic_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-formal_logic|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-formal_logic|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_global_facts_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-global_facts|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-global_facts|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_high_school_biology_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_high_school_chemistry_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_high_school_computer_science_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_high_school_european_history_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_high_school_geography_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_high_school_government_and_politics_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_high_school_macroeconomics_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_high_school_mathematics_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_high_school_microeconomics_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_high_school_physics_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - 
'**/details_harness|hendrycksTest-high_school_physics|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_high_school_psychology_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_high_school_statistics_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_high_school_us_history_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_high_school_world_history_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_human_aging_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-human_aging|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_aging|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_human_sexuality_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - 
'**/details_harness|hendrycksTest-human_sexuality|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_international_law_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-international_law|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-international_law|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_jurisprudence_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_logical_fallacies_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_machine_learning_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-machine_learning|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-machine_learning|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_management_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-management|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-management|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_marketing_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-marketing|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-marketing|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_medical_genetics_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_miscellaneous_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_moral_disputes_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_moral_scenarios_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_nutrition_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-nutrition|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-nutrition|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_philosophy_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-philosophy|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-philosophy|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_prehistory_5 data_files: - split: 
2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-prehistory|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-prehistory|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_professional_accounting_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_professional_law_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-professional_law|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_law|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_professional_medicine_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_professional_psychology_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_public_relations_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-public_relations|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-public_relations|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_security_studies_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - 
'**/details_harness|hendrycksTest-security_studies|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-security_studies|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_sociology_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-sociology|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-sociology|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_us_foreign_policy_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_virology_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-virology|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-virology|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_hendrycksTest_world_religions_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|hendrycksTest-world_religions|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|hendrycksTest-world_religions|5_2024-03-12T06-40-34.340598.parquet' - config_name: harness_truthfulqa_mc_0 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|truthfulqa:mc|0_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|truthfulqa:mc|0_2024-03-12T06-40-34.340598.parquet' - config_name: harness_winogrande_5 data_files: - split: 2024_03_12T06_40_34.340598 path: - '**/details_harness|winogrande|5_2024-03-12T06-40-34.340598.parquet' - split: latest path: - '**/details_harness|winogrande|5_2024-03-12T06-40-34.340598.parquet' - config_name: results data_files: - split: 
2024_03_12T06_40_34.340598 path: - results_2024-03-12T06-40-34.340598.parquet - split: latest path: - results_2024-03-12T06-40-34.340598.parquet
---

# Dataset Card for Evaluation run of Test157t/Prima-LelantaclesV6.25-7b

<!-- Provide a quick summary of the dataset. -->

Dataset automatically created during the evaluation run of model [Test157t/Prima-LelantaclesV6.25-7b](https://huggingface.co/Test157t/Prima-LelantaclesV6.25-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:

```python
from datasets import load_dataset

data = load_dataset("open-llm-leaderboard/details_Test157t__Prima-LelantaclesV6.25-7b",
                    "harness_winogrande_5",
                    split="train")
```

## Latest results

These are the [latest results from run 2024-03-12T06:40:34.340598](https://huggingface.co/datasets/open-llm-leaderboard/details_Test157t__Prima-LelantaclesV6.25-7b/blob/main/results_2024-03-12T06-40-34.340598.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks.
You can find each in the results and the "latest" split for each eval):

```python
{
    "all": { "acc": 0.6479869212572227, "acc_stderr": 0.032212425535783476, "acc_norm": 0.6488546794400761, "acc_norm_stderr": 0.03287124442527795, "mc1": 0.5177478580171359, "mc1_stderr": 0.017492470843075356, "mc2": 0.6744414664863697, "mc2_stderr": 0.015117300796412379 },
    "harness|arc:challenge|25": { "acc": 0.6723549488054608, "acc_stderr": 0.013715847940719339, "acc_norm": 0.6911262798634812, "acc_norm_stderr": 0.013501770929344003 },
    "harness|hellaswag|10": { "acc": 0.7066321449910377, "acc_stderr": 0.004543750480065779, "acc_norm": 0.8729336785500896, "acc_norm_stderr": 0.003323665964412193 },
    "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.29, "acc_stderr": 0.045604802157206845, "acc_norm": 0.29, "acc_norm_stderr": 0.045604802157206845 },
    "harness|hendrycksTest-anatomy|5": { "acc": 0.6222222222222222, "acc_stderr": 0.04188307537595853, "acc_norm": 0.6222222222222222, "acc_norm_stderr": 0.04188307537595853 },
    "harness|hendrycksTest-astronomy|5": { "acc": 0.6842105263157895, "acc_stderr": 0.0378272898086547, "acc_norm": 0.6842105263157895, "acc_norm_stderr": 0.0378272898086547 },
    "harness|hendrycksTest-business_ethics|5": { "acc": 0.6, "acc_stderr": 0.04923659639173309, "acc_norm": 0.6, "acc_norm_stderr": 0.04923659639173309 },
    "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.7169811320754716, "acc_stderr": 0.027724236492700918, "acc_norm": 0.7169811320754716, "acc_norm_stderr": 0.027724236492700918 },
    "harness|hendrycksTest-college_biology|5": { "acc": 0.7569444444444444, "acc_stderr": 0.0358687928008034, "acc_norm": 0.7569444444444444, "acc_norm_stderr": 0.0358687928008034 },
    "harness|hendrycksTest-college_chemistry|5": { "acc": 0.45, "acc_stderr": 0.05, "acc_norm": 0.45, "acc_norm_stderr": 0.05 },
    "harness|hendrycksTest-college_computer_science|5": { "acc": 0.54, "acc_stderr": 0.05009082659620333, "acc_norm": 0.54, "acc_norm_stderr": 0.05009082659620333 },
    "harness|hendrycksTest-college_mathematics|5": { "acc": 0.3, "acc_stderr": 0.046056618647183814, "acc_norm": 0.3, "acc_norm_stderr": 0.046056618647183814 },
    "harness|hendrycksTest-college_medicine|5": { "acc": 0.6358381502890174, "acc_stderr": 0.03669072477416907, "acc_norm": 0.6358381502890174, "acc_norm_stderr": 0.03669072477416907 },
    "harness|hendrycksTest-college_physics|5": { "acc": 0.37254901960784315, "acc_stderr": 0.048108401480826346, "acc_norm": 0.37254901960784315, "acc_norm_stderr": 0.048108401480826346 },
    "harness|hendrycksTest-computer_security|5": { "acc": 0.79, "acc_stderr": 0.04093601807403326, "acc_norm": 0.79, "acc_norm_stderr": 0.04093601807403326 },
    "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5787234042553191, "acc_stderr": 0.03227834510146267, "acc_norm": 0.5787234042553191, "acc_norm_stderr": 0.03227834510146267 },
    "harness|hendrycksTest-econometrics|5": { "acc": 0.5175438596491229, "acc_stderr": 0.04700708033551038, "acc_norm": 0.5175438596491229, "acc_norm_stderr": 0.04700708033551038 },
    "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.593103448275862, "acc_stderr": 0.04093793981266236, "acc_norm": 0.593103448275862, "acc_norm_stderr": 0.04093793981266236 },
    "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.41005291005291006, "acc_stderr": 0.025331202438944433, "acc_norm": 0.41005291005291006, "acc_norm_stderr": 0.025331202438944433 },
    "harness|hendrycksTest-formal_logic|5": { "acc": 0.4603174603174603, "acc_stderr": 0.04458029125470973, "acc_norm": 0.4603174603174603, "acc_norm_stderr": 0.04458029125470973 },
    "harness|hendrycksTest-global_facts|5": { "acc": 0.37, "acc_stderr": 0.04852365870939099, "acc_norm": 0.37, "acc_norm_stderr": 0.04852365870939099 },
    "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7741935483870968, "acc_stderr": 0.023785577884181012, "acc_norm": 0.7741935483870968, "acc_norm_stderr": 0.023785577884181012 },
    "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.541871921182266, "acc_stderr": 0.03505630140785741, "acc_norm": 0.541871921182266, "acc_norm_stderr": 0.03505630140785741 },
    "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.73, "acc_stderr": 0.044619604333847394, "acc_norm": 0.73, "acc_norm_stderr": 0.044619604333847394 },
    "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7878787878787878, "acc_stderr": 0.031922715695483, "acc_norm": 0.7878787878787878, "acc_norm_stderr": 0.031922715695483 },
    "harness|hendrycksTest-high_school_geography|5": { "acc": 0.797979797979798, "acc_stderr": 0.02860620428922987, "acc_norm": 0.797979797979798, "acc_norm_stderr": 0.02860620428922987 },
    "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8963730569948186, "acc_stderr": 0.02199531196364424, "acc_norm": 0.8963730569948186, "acc_norm_stderr": 0.02199531196364424 },
    "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6717948717948717, "acc_stderr": 0.023807633198657262, "acc_norm": 0.6717948717948717, "acc_norm_stderr": 0.023807633198657262 },
    "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.34814814814814815, "acc_stderr": 0.029045600290616255, "acc_norm": 0.34814814814814815, "acc_norm_stderr": 0.029045600290616255 },
    "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.7016806722689075, "acc_stderr": 0.029719142876342856, "acc_norm": 0.7016806722689075, "acc_norm_stderr": 0.029719142876342856 },
    "harness|hendrycksTest-high_school_physics|5": { "acc": 0.37748344370860926, "acc_stderr": 0.03958027231121569, "acc_norm": 0.37748344370860926, "acc_norm_stderr": 0.03958027231121569 },
    "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8385321100917431, "acc_stderr": 0.015776239256163265, "acc_norm": 0.8385321100917431, "acc_norm_stderr": 0.015776239256163265 },
    "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5555555555555556, "acc_stderr": 0.03388857118502325, "acc_norm": 0.5555555555555556, "acc_norm_stderr": 0.03388857118502325 },
    "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8186274509803921, "acc_stderr": 0.027044621719474082, "acc_norm": 0.8186274509803921, "acc_norm_stderr": 0.027044621719474082 },
    "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7974683544303798, "acc_stderr": 0.026160568246601457, "acc_norm": 0.7974683544303798, "acc_norm_stderr": 0.026160568246601457 },
    "harness|hendrycksTest-human_aging|5": { "acc": 0.6816143497757847, "acc_stderr": 0.03126580522513713, "acc_norm": 0.6816143497757847, "acc_norm_stderr": 0.03126580522513713 },
    "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7709923664122137, "acc_stderr": 0.036853466317118506, "acc_norm": 0.7709923664122137, "acc_norm_stderr": 0.036853466317118506 },
    "harness|hendrycksTest-international_law|5": { "acc": 0.7520661157024794, "acc_stderr": 0.03941897526516303, "acc_norm": 0.7520661157024794, "acc_norm_stderr": 0.03941897526516303 },
    "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7777777777777778, "acc_stderr": 0.040191074725573483, "acc_norm": 0.7777777777777778, "acc_norm_stderr": 0.040191074725573483 },
    "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7668711656441718, "acc_stderr": 0.0332201579577674, "acc_norm": 0.7668711656441718, "acc_norm_stderr": 0.0332201579577674 },
    "harness|hendrycksTest-machine_learning|5": { "acc": 0.4732142857142857, "acc_stderr": 0.047389751192741546, "acc_norm": 0.4732142857142857, "acc_norm_stderr": 0.047389751192741546 },
    "harness|hendrycksTest-management|5": { "acc": 0.7475728155339806, "acc_stderr": 0.04301250399690878, "acc_norm": 0.7475728155339806, "acc_norm_stderr": 0.04301250399690878 },
    "harness|hendrycksTest-marketing|5": { "acc": 0.8632478632478633, "acc_stderr": 0.0225090339370778, "acc_norm": 0.8632478632478633, "acc_norm_stderr": 0.0225090339370778 },
    "harness|hendrycksTest-medical_genetics|5": { "acc": 0.7, "acc_stderr": 0.046056618647183814, "acc_norm": 0.7, "acc_norm_stderr": 0.046056618647183814 },
    "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8173690932311622, "acc_stderr": 0.013816335389973138, "acc_norm": 0.8173690932311622, "acc_norm_stderr": 0.013816335389973138 },
    "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7283236994219653, "acc_stderr": 0.023948512905468365, "acc_norm": 0.7283236994219653, "acc_norm_stderr": 0.023948512905468365 },
    "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.41787709497206704, "acc_stderr": 0.016495400635820084, "acc_norm": 0.41787709497206704, "acc_norm_stderr": 0.016495400635820084 },
    "harness|hendrycksTest-nutrition|5": { "acc": 0.7058823529411765, "acc_stderr": 0.026090162504279053, "acc_norm": 0.7058823529411765, "acc_norm_stderr": 0.026090162504279053 },
    "harness|hendrycksTest-philosophy|5": { "acc": 0.7041800643086816, "acc_stderr": 0.025922371788818774, "acc_norm": 0.7041800643086816, "acc_norm_stderr": 0.025922371788818774 },
    "harness|hendrycksTest-prehistory|5": { "acc": 0.7376543209876543, "acc_stderr": 0.024477222856135107, "acc_norm": 0.7376543209876543, "acc_norm_stderr": 0.024477222856135107 },
    "harness|hendrycksTest-professional_accounting|5": { "acc": 0.5141843971631206, "acc_stderr": 0.02981549448368206, "acc_norm": 0.5141843971631206, "acc_norm_stderr": 0.02981549448368206 },
    "harness|hendrycksTest-professional_law|5": { "acc": 0.4621903520208605, "acc_stderr": 0.012733671880342506, "acc_norm": 0.4621903520208605, "acc_norm_stderr": 0.012733671880342506 },
    "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6654411764705882, "acc_stderr": 0.0286619962023353, "acc_norm": 0.6654411764705882, "acc_norm_stderr": 0.0286619962023353 },
    "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6503267973856209, "acc_stderr": 0.01929196189506638, "acc_norm": 0.6503267973856209, "acc_norm_stderr": 0.01929196189506638 },
    "harness|hendrycksTest-public_relations|5": { "acc": 0.7, "acc_stderr": 0.04389311454644287, "acc_norm": 0.7, "acc_norm_stderr": 0.04389311454644287 },
    "harness|hendrycksTest-security_studies|5": { "acc": 0.726530612244898, "acc_stderr": 0.028535560337128448, "acc_norm": 0.726530612244898, "acc_norm_stderr": 0.028535560337128448 },
    "harness|hendrycksTest-sociology|5": { "acc": 0.8308457711442786, "acc_stderr": 0.026508590656233264, "acc_norm": 0.8308457711442786, "acc_norm_stderr": 0.026508590656233264 },
    "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.85, "acc_stderr": 0.03588702812826371, "acc_norm": 0.85, "acc_norm_stderr": 0.03588702812826371 },
    "harness|hendrycksTest-virology|5": { "acc": 0.5060240963855421, "acc_stderr": 0.03892212195333045, "acc_norm": 0.5060240963855421, "acc_norm_stderr": 0.03892212195333045 },
    "harness|hendrycksTest-world_religions|5": { "acc": 0.8421052631578947, "acc_stderr": 0.027966785859160893, "acc_norm": 0.8421052631578947, "acc_norm_stderr": 0.027966785859160893 },
    "harness|truthfulqa:mc|0": { "mc1": 0.5177478580171359, "mc1_stderr": 0.017492470843075356, "mc2": 0.6744414664863697, "mc2_stderr": 0.015117300796412379 },
    "harness|winogrande|5": { "acc": 0.8263614838200474, "acc_stderr": 0.010646116480331 },
    "harness|gsm8k|5": { "acc": 0.6034874905231236, "acc_stderr": 0.013474258584033345 }
}
```

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]

### Dataset Sources [optional]

<!-- Provide the basic links for the dataset. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

<!-- This section describes suitable use cases for the dataset.
--> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
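The per-task scores reported earlier in this card can be post-processed directly from the results JSON. A minimal sketch, aggregating a hand-picked subset of the tasks above (task names and accuracy values are copied from this card's results; the subset chosen here is arbitrary, not an official aggregate):

```python
# Mean accuracy over a small, illustrative subset of the per-task results
# shown in this card (values copied verbatim from the JSON above).
results = {
    "harness|hendrycksTest-marketing|5": {"acc": 0.8632478632478633},
    "harness|hendrycksTest-virology|5": {"acc": 0.5060240963855421},
    "harness|winogrande|5": {"acc": 0.8263614838200474},
}

mean_acc = sum(task["acc"] for task in results.values()) / len(results)
```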
polinaeterna/test_string_to_dict
--- dataset_info: features: - name: x dtype: int64 - name: y dtype: int64 splits: - name: train num_bytes: 48 num_examples: 3 download_size: 0 dataset_size: 48 builder_config: data_files: - split: train pattern: data/train-* --- # Dataset Card for "test_string_to_dict" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
weaverbirdllm/fin_report_summarization_cn
--- license: apache-2.0 dataset_info: features: - name: instruction dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 6492671 num_examples: 1820 download_size: 3818593 dataset_size: 6492671 configs: - config_name: default data_files: - split: train path: data/train-* ---
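The card above declares an instruction/input/output schema, which is typical of instruction-tuning data. A minimal sketch of assembling a training prompt and target from one such row (the field values below are illustrative placeholders, not real rows from the dataset):

```python
# Illustrative row matching the instruction/input/output schema declared
# in the card above; the strings are made-up placeholders.
example = {
    "instruction": "Summarize the following financial report.",
    "input": "Revenue grew 12% year over year...",
    "output": "Revenue rose 12% YoY.",
}

# One common convention: instruction and input form the prompt,
# output is the generation target.
prompt = f"{example['instruction']}\n\n{example['input']}"
target = example["output"]
```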
Rizqi/emotion-raw
--- license: afl-3.0 ---
Seanxh/twitter_dataset_1713184998
--- dataset_info: features: - name: id dtype: string - name: tweet_content dtype: string - name: user_name dtype: string - name: user_id dtype: string - name: created_at dtype: string - name: url dtype: string - name: favourite_count dtype: int64 - name: scraped_at dtype: string - name: image_urls dtype: string splits: - name: train num_bytes: 28305 num_examples: 65 download_size: 15073 dataset_size: 28305 configs: - config_name: default data_files: - split: train path: data/train-* ---
Abdelkareem/arabic_tweets_classification
--- dataset_info: features: - name: Date dtype: string - name: Time dtype: string - name: Date Time dtype: string - name: URL dtype: string - name: Tweet Text dtype: string - name: Cleaned Text dtype: string - name: User Name dtype: string - name: Location dtype: string - name: 'Replied Tweet ID ' dtype: float64 - name: Replied Tweet User ID dtype: float64 - name: Replied Tweet User name dtype: string - name: Coordinates dtype: float64 - name: Retweet Count dtype: float64 - name: Favorite Count dtype: int64 - name: Favorited dtype: string - name: Label dtype: string splits: - name: train num_bytes: 7469621 num_examples: 13240 download_size: 3109198 dataset_size: 7469621 --- # Dataset Card for "arabic_tweets_classification" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
davanstrien/autotrain-data-new-datasets-2

umuth/keywording
--- license: apache-2.0 task_categories: - image-feature-extraction language: - en tags: - keywording - vgg16 - kosmos2 - tagging size_categories: - n<1K ---
bandoos/test_fr
--- task_categories: - token-classification language: - fr --- Just a test to see how this works.
freshpearYoon/v3_train_free_2
--- dataset_info: features: - name: input_features sequence: sequence: float32 - name: labels sequence: int64 splits: - name: train num_bytes: 15366775104 num_examples: 10000 download_size: 2173407181 dataset_size: 15366775104 configs: - config_name: default data_files: - split: train path: data/train-* ---
jlbaker361/league-maybe-dream-50
--- dataset_info: features: - name: image dtype: image - name: prompt dtype: string - name: seed dtype: int64 - name: steps dtype: int64 splits: - name: train num_bytes: 69130340.0 num_examples: 72 download_size: 69129592 dataset_size: 69130340.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
KhalfounMehdi/mura_dataset_processed_224px
--- configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: image dtype: image - name: label dtype: class_label: names: '0': abnormal '1': normal splits: - name: train num_bytes: 997379264.375 num_examples: 40005 download_size: 997532653 dataset_size: 997379264.375 --- # Dataset Card for "mura_dataset_processed_224px" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
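The schema above maps integer class labels to names ('0': abnormal, '1': normal). A minimal sketch of decoding integer labels back to their names under that mapping (the label list below is illustrative, not real predictions):

```python
# Index == label id, exactly as declared in the card's class_label names.
names = ["abnormal", "normal"]

labels = [1, 0, 0, 1]  # illustrative integer labels
decoded = [names[i] for i in labels]
```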
CVasNLPExperiments/TinyImagenet_800_validation_google_flan_t5_xxl_mode_T_SPECIFIC_A_ns_800
--- dataset_info: features: - name: id dtype: int64 - name: prompt dtype: string - name: true_label dtype: string - name: prediction dtype: string splits: - name: fewshot_0__Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_clip_tags_LAION_ViT_H_14_2B_simple_specific_rices num_bytes: 352566 num_examples: 800 download_size: 102562 dataset_size: 352566 --- # Dataset Card for "TinyImagenet_800_validation_google_flan_t5_xxl_mode_T_SPECIFIC_A_ns_800" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
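The card above stores a `true_label` and a `prediction` string per row, so a natural post-processing step is exact-match accuracy. A minimal sketch over rows shaped like that schema (the rows here are illustrative stand-ins, not real dataset entries):

```python
# Score exact-match accuracy over (true_label, prediction) pairs,
# matching the column names declared in the card above.
rows = [
    {"true_label": "goldfish", "prediction": "goldfish"},
    {"true_label": "tarantula", "prediction": "spider"},
    {"true_label": "bullfrog", "prediction": "bullfrog"},
]

correct = sum(r["true_label"] == r["prediction"] for r in rows)
accuracy = correct / len(rows)
```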
htrhdgfh/heatherdataset
--- license: openrail ---
open-llm-leaderboard/details_openbmb__Eurus-RM-7b
--- pretty_name: Evaluation run of openbmb/Eurus-RM-7b dataset_summary: "Dataset automatically created during the evaluation run of model\ \ [openbmb/Eurus-RM-7b](https://huggingface.co/openbmb/Eurus-RM-7b) on the [Open\ \ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\ \nThe dataset is composed of 63 configurations, each one corresponding to one of the\ \ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\ \ found as a specific split in each configuration, the split being named using the\ \ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\ \nAn additional configuration \"results\" stores all the aggregated results of the\ \ run (and is used to compute and display the aggregated metrics on the [Open LLM\ \ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\ \nTo load the details from a run, you can for instance do the following:\n```python\n\ from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_openbmb__Eurus-RM-7b\"\ ,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\ These are the [latest results from run 2024-04-09T06:50:00.788799](https://huggingface.co/datasets/open-llm-leaderboard/details_openbmb__Eurus-RM-7b/blob/main/results_2024-04-09T06-50-00.788799.json) (note\ \ that there might be results for other tasks in the repos if successive evals didn't\ \ cover the same tasks. 
You find each in the results and the \"latest\" split for\ \ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.2539903952511765,\n\ \ \"acc_stderr\": 0.03098008602736365,\n \"acc_norm\": 0.25551266190257,\n\ \ \"acc_norm_stderr\": 0.03181160531793125,\n \"mc1\": 0.2350061199510404,\n\ \ \"mc1_stderr\": 0.014843061507731611,\n \"mc2\": NaN,\n \"\ mc2_stderr\": NaN\n },\n \"harness|arc:challenge|25\": {\n \"acc\"\ : 0.22098976109215018,\n \"acc_stderr\": 0.012124929206818258,\n \"\ acc_norm\": 0.27474402730375425,\n \"acc_norm_stderr\": 0.013044617212771227\n\ \ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.27066321449910374,\n\ \ \"acc_stderr\": 0.004433943894764252,\n \"acc_norm\": 0.3196574387572197,\n\ \ \"acc_norm_stderr\": 0.004653907471785631\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\ : {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \ \ \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n \ \ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.26666666666666666,\n\ \ \"acc_stderr\": 0.03820169914517904,\n \"acc_norm\": 0.26666666666666666,\n\ \ \"acc_norm_stderr\": 0.03820169914517904\n },\n \"harness|hendrycksTest-astronomy|5\"\ : {\n \"acc\": 0.2236842105263158,\n \"acc_stderr\": 0.03391160934343601,\n\ \ \"acc_norm\": 0.2236842105263158,\n \"acc_norm_stderr\": 0.03391160934343601\n\ \ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.25,\n\ \ \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n \ \ \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\ : {\n \"acc\": 0.2792452830188679,\n \"acc_stderr\": 0.027611163402399715,\n\ \ \"acc_norm\": 0.2792452830188679,\n \"acc_norm_stderr\": 0.027611163402399715\n\ \ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2916666666666667,\n\ \ \"acc_stderr\": 0.03800968060554858,\n \"acc_norm\": 0.2916666666666667,\n\ \ \"acc_norm_stderr\": 0.03800968060554858\n },\n 
\"harness|hendrycksTest-college_chemistry|5\"\ : {\n \"acc\": 0.22,\n \"acc_stderr\": 0.0416333199893227,\n \ \ \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.0416333199893227\n },\n\ \ \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.31,\n\ \ \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \ \ \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-college_mathematics|5\"\ : {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252605,\n \ \ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252605\n \ \ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.21965317919075145,\n\ \ \"acc_stderr\": 0.031568093627031744,\n \"acc_norm\": 0.21965317919075145,\n\ \ \"acc_norm_stderr\": 0.031568093627031744\n },\n \"harness|hendrycksTest-college_physics|5\"\ : {\n \"acc\": 0.28431372549019607,\n \"acc_stderr\": 0.04488482852329017,\n\ \ \"acc_norm\": 0.28431372549019607,\n \"acc_norm_stderr\": 0.04488482852329017\n\ \ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\ \ 0.23,\n \"acc_stderr\": 0.042295258468165044,\n \"acc_norm\": 0.23,\n\ \ \"acc_norm_stderr\": 0.042295258468165044\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\ : {\n \"acc\": 0.28936170212765955,\n \"acc_stderr\": 0.029644006577009618,\n\ \ \"acc_norm\": 0.28936170212765955,\n \"acc_norm_stderr\": 0.029644006577009618\n\ \ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.23684210526315788,\n\ \ \"acc_stderr\": 0.03999423879281338,\n \"acc_norm\": 0.23684210526315788,\n\ \ \"acc_norm_stderr\": 0.03999423879281338\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\ : {\n \"acc\": 0.296551724137931,\n \"acc_stderr\": 0.038061426873099935,\n\ \ \"acc_norm\": 0.296551724137931,\n \"acc_norm_stderr\": 0.038061426873099935\n\ \ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\ : 0.23809523809523808,\n \"acc_stderr\": 0.021935878081184763,\n \"\ acc_norm\": 0.23809523809523808,\n 
\"acc_norm_stderr\": 0.021935878081184763\n\ \ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.25396825396825395,\n\ \ \"acc_stderr\": 0.03893259610604674,\n \"acc_norm\": 0.25396825396825395,\n\ \ \"acc_norm_stderr\": 0.03893259610604674\n },\n \"harness|hendrycksTest-global_facts|5\"\ : {\n \"acc\": 0.29,\n \"acc_stderr\": 0.04560480215720683,\n \ \ \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.04560480215720683\n \ \ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.22903225806451613,\n\ \ \"acc_stderr\": 0.02390491431178265,\n \"acc_norm\": 0.22903225806451613,\n\ \ \"acc_norm_stderr\": 0.02390491431178265\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\ : {\n \"acc\": 0.22167487684729065,\n \"acc_stderr\": 0.029225575892489607,\n\ \ \"acc_norm\": 0.22167487684729065,\n \"acc_norm_stderr\": 0.029225575892489607\n\ \ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \ \ \"acc\": 0.27,\n \"acc_stderr\": 0.044619604333847415,\n \"acc_norm\"\ : 0.27,\n \"acc_norm_stderr\": 0.044619604333847415\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\ : {\n \"acc\": 0.24242424242424243,\n \"acc_stderr\": 0.03346409881055952,\n\ \ \"acc_norm\": 0.24242424242424243,\n \"acc_norm_stderr\": 0.03346409881055952\n\ \ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\ : 0.22727272727272727,\n \"acc_stderr\": 0.0298575156733864,\n \"\ acc_norm\": 0.22727272727272727,\n \"acc_norm_stderr\": 0.0298575156733864\n\ \ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\ \ \"acc\": 0.2694300518134715,\n \"acc_stderr\": 0.03201867122877794,\n\ \ \"acc_norm\": 0.2694300518134715,\n \"acc_norm_stderr\": 0.03201867122877794\n\ \ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \ \ \"acc\": 0.2512820512820513,\n \"acc_stderr\": 0.021992016662370547,\n\ \ \"acc_norm\": 0.2512820512820513,\n \"acc_norm_stderr\": 0.021992016662370547\n\ \ },\n 
\"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\ acc\": 0.25925925925925924,\n \"acc_stderr\": 0.026719240783712166,\n \ \ \"acc_norm\": 0.25925925925925924,\n \"acc_norm_stderr\": 0.026719240783712166\n\ \ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \ \ \"acc\": 0.22268907563025211,\n \"acc_stderr\": 0.027025433498882378,\n\ \ \"acc_norm\": 0.22268907563025211,\n \"acc_norm_stderr\": 0.027025433498882378\n\ \ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\ : 0.2847682119205298,\n \"acc_stderr\": 0.03684881521389024,\n \"\ acc_norm\": 0.2847682119205298,\n \"acc_norm_stderr\": 0.03684881521389024\n\ \ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\ : 0.24036697247706423,\n \"acc_stderr\": 0.01832060732096407,\n \"\ acc_norm\": 0.24036697247706423,\n \"acc_norm_stderr\": 0.01832060732096407\n\ \ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\ : 0.3333333333333333,\n \"acc_stderr\": 0.03214952147802748,\n \"\ acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.03214952147802748\n\ \ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\ : 0.17647058823529413,\n \"acc_stderr\": 0.02675640153807896,\n \"\ acc_norm\": 0.17647058823529413,\n \"acc_norm_stderr\": 0.02675640153807896\n\ \ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\ acc\": 0.20253164556962025,\n \"acc_stderr\": 0.026160568246601453,\n \ \ \"acc_norm\": 0.20253164556962025,\n \"acc_norm_stderr\": 0.026160568246601453\n\ \ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.24663677130044842,\n\ \ \"acc_stderr\": 0.028930413120910856,\n \"acc_norm\": 0.24663677130044842,\n\ \ \"acc_norm_stderr\": 0.028930413120910856\n },\n \"harness|hendrycksTest-human_sexuality|5\"\ : {\n \"acc\": 0.25190839694656486,\n \"acc_stderr\": 0.03807387116306085,\n\ \ \"acc_norm\": 0.25190839694656486,\n \"acc_norm_stderr\": 0.03807387116306085\n\ \ },\n 
\"harness|hendrycksTest-international_law|5\": {\n \"acc\":\ \ 0.3140495867768595,\n \"acc_stderr\": 0.042369647530410184,\n \"\ acc_norm\": 0.3140495867768595,\n \"acc_norm_stderr\": 0.042369647530410184\n\ \ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.21296296296296297,\n\ \ \"acc_stderr\": 0.039578354719809805,\n \"acc_norm\": 0.21296296296296297,\n\ \ \"acc_norm_stderr\": 0.039578354719809805\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\ : {\n \"acc\": 0.24539877300613497,\n \"acc_stderr\": 0.03380939813943354,\n\ \ \"acc_norm\": 0.24539877300613497,\n \"acc_norm_stderr\": 0.03380939813943354\n\ \ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.26785714285714285,\n\ \ \"acc_stderr\": 0.04203277291467762,\n \"acc_norm\": 0.26785714285714285,\n\ \ \"acc_norm_stderr\": 0.04203277291467762\n },\n \"harness|hendrycksTest-management|5\"\ : {\n \"acc\": 0.2524271844660194,\n \"acc_stderr\": 0.04301250399690878,\n\ \ \"acc_norm\": 0.2524271844660194,\n \"acc_norm_stderr\": 0.04301250399690878\n\ \ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.21794871794871795,\n\ \ \"acc_stderr\": 0.02704685763071664,\n \"acc_norm\": 0.21794871794871795,\n\ \ \"acc_norm_stderr\": 0.02704685763071664\n },\n \"harness|hendrycksTest-medical_genetics|5\"\ : {\n \"acc\": 0.27,\n \"acc_stderr\": 0.044619604333847394,\n \ \ \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.044619604333847394\n \ \ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.28607918263090676,\n\ \ \"acc_stderr\": 0.01616087140512753,\n \"acc_norm\": 0.28607918263090676,\n\ \ \"acc_norm_stderr\": 0.01616087140512753\n },\n \"harness|hendrycksTest-moral_disputes|5\"\ : {\n \"acc\": 0.22832369942196531,\n \"acc_stderr\": 0.022598703804321624,\n\ \ \"acc_norm\": 0.22832369942196531,\n \"acc_norm_stderr\": 0.022598703804321624\n\ \ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2201117318435754,\n\ \ \"acc_stderr\": 0.013856994024227173,\n 
\"acc_norm\": 0.2201117318435754,\n\ \ \"acc_norm_stderr\": 0.013856994024227173\n },\n \"harness|hendrycksTest-nutrition|5\"\ : {\n \"acc\": 0.24836601307189543,\n \"acc_stderr\": 0.024739981355113596,\n\ \ \"acc_norm\": 0.24836601307189543,\n \"acc_norm_stderr\": 0.024739981355113596\n\ \ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.2572347266881029,\n\ \ \"acc_stderr\": 0.024826171289250888,\n \"acc_norm\": 0.2572347266881029,\n\ \ \"acc_norm_stderr\": 0.024826171289250888\n },\n \"harness|hendrycksTest-prehistory|5\"\ : {\n \"acc\": 0.23148148148148148,\n \"acc_stderr\": 0.02346842983245114,\n\ \ \"acc_norm\": 0.23148148148148148,\n \"acc_norm_stderr\": 0.02346842983245114\n\ \ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\ acc\": 0.2624113475177305,\n \"acc_stderr\": 0.026244920349843014,\n \ \ \"acc_norm\": 0.2624113475177305,\n \"acc_norm_stderr\": 0.026244920349843014\n\ \ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.23402868318122555,\n\ \ \"acc_stderr\": 0.010813585552659684,\n \"acc_norm\": 0.23402868318122555,\n\ \ \"acc_norm_stderr\": 0.010813585552659684\n },\n \"harness|hendrycksTest-professional_medicine|5\"\ : {\n \"acc\": 0.2426470588235294,\n \"acc_stderr\": 0.026040662474201278,\n\ \ \"acc_norm\": 0.2426470588235294,\n \"acc_norm_stderr\": 0.026040662474201278\n\ \ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\ acc\": 0.2826797385620915,\n \"acc_stderr\": 0.01821726955205344,\n \ \ \"acc_norm\": 0.2826797385620915,\n \"acc_norm_stderr\": 0.01821726955205344\n\ \ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.2909090909090909,\n\ \ \"acc_stderr\": 0.04350271442923243,\n \"acc_norm\": 0.2909090909090909,\n\ \ \"acc_norm_stderr\": 0.04350271442923243\n },\n \"harness|hendrycksTest-security_studies|5\"\ : {\n \"acc\": 0.2571428571428571,\n \"acc_stderr\": 0.027979823538744546,\n\ \ \"acc_norm\": 0.2571428571428571,\n \"acc_norm_stderr\": 
0.027979823538744546\n\ \ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.23880597014925373,\n\ \ \"acc_stderr\": 0.030147775935409217,\n \"acc_norm\": 0.23880597014925373,\n\ \ \"acc_norm_stderr\": 0.030147775935409217\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\ : {\n \"acc\": 0.22,\n \"acc_stderr\": 0.041633319989322695,\n \ \ \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.041633319989322695\n \ \ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.2710843373493976,\n\ \ \"acc_stderr\": 0.03460579907553027,\n \"acc_norm\": 0.2710843373493976,\n\ \ \"acc_norm_stderr\": 0.03460579907553027\n },\n \"harness|hendrycksTest-world_religions|5\"\ : {\n \"acc\": 0.23976608187134502,\n \"acc_stderr\": 0.03274485211946956,\n\ \ \"acc_norm\": 0.23976608187134502,\n \"acc_norm_stderr\": 0.03274485211946956\n\ \ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2350061199510404,\n\ \ \"mc1_stderr\": 0.014843061507731611,\n \"mc2\": NaN,\n \"\ mc2_stderr\": NaN\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5209155485398579,\n\ \ \"acc_stderr\": 0.014040185494212945\n },\n \"harness|gsm8k|5\":\ \ {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n }\n}\n```" repo_url: https://huggingface.co/openbmb/Eurus-RM-7b leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard point_of_contact: clementine@hf.co configs: - config_name: harness_arc_challenge_25 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|arc:challenge|25_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|arc:challenge|25_2024-04-09T06-50-00.788799.parquet' - config_name: harness_gsm8k_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|gsm8k|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|gsm8k|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hellaswag_10 data_files: - split: 2024_04_09T06_50_00.788799 path: - 
'**/details_harness|hellaswag|10_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hellaswag|10_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-college_biology|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-09T06-50-00.788799.parquet' - 
'**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-international_law|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-management|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-marketing|5_2024-04-09T06-50-00.788799.parquet' 
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-sociology|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-virology|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-anatomy|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-astronomy|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-business_ethics|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-09T06-50-00.788799.parquet' - 
'**/details_harness|hendrycksTest-college_biology|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-college_medicine|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-college_physics|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-computer_security|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-econometrics|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-formal_logic|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-global_facts|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-09T06-50-00.788799.parquet' - 
'**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-human_aging|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-international_law|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-machine_learning|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-management|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-marketing|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-nutrition|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-philosophy|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-prehistory|5_2024-04-09T06-50-00.788799.parquet' - 
'**/details_harness|hendrycksTest-professional_accounting|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-professional_law|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-public_relations|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-security_studies|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-sociology|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-virology|5_2024-04-09T06-50-00.788799.parquet' - '**/details_harness|hendrycksTest-world_religions|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_abstract_algebra_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_anatomy_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-anatomy|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-anatomy|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_astronomy_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-astronomy|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-astronomy|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_business_ethics_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - 
'**/details_harness|hendrycksTest-business_ethics|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-business_ethics|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_clinical_knowledge_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_college_biology_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-college_biology|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_biology|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_college_chemistry_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_college_computer_science_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_college_mathematics_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_college_medicine_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - 
'**/details_harness|hendrycksTest-college_medicine|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_medicine|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_college_physics_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-college_physics|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-college_physics|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_computer_security_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-computer_security|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-computer_security|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_conceptual_physics_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_econometrics_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-econometrics|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-econometrics|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_electrical_engineering_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_elementary_mathematics_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-09T06-50-00.788799.parquet' - 
split: latest path: - '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_formal_logic_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-formal_logic|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-formal_logic|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_global_facts_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-global_facts|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-global_facts|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_high_school_biology_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_high_school_chemistry_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_high_school_computer_science_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_high_school_european_history_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_high_school_geography_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_high_school_government_and_politics_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_high_school_macroeconomics_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_high_school_mathematics_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_high_school_microeconomics_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_high_school_physics_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - 
'**/details_harness|hendrycksTest-high_school_physics|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_high_school_psychology_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_high_school_statistics_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_high_school_us_history_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_high_school_world_history_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_human_aging_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-human_aging|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_aging|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_human_sexuality_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - 
'**/details_harness|hendrycksTest-human_sexuality|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_international_law_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-international_law|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-international_law|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_jurisprudence_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_logical_fallacies_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_machine_learning_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-machine_learning|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-machine_learning|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_management_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-management|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-management|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_marketing_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-marketing|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - 
'**/details_harness|hendrycksTest-marketing|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_medical_genetics_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_miscellaneous_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_moral_disputes_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_moral_scenarios_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_nutrition_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-nutrition|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-nutrition|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_philosophy_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-philosophy|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-philosophy|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_prehistory_5 data_files: - split: 
2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-prehistory|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-prehistory|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_professional_accounting_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_professional_law_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-professional_law|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_law|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_professional_medicine_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_professional_psychology_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_public_relations_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-public_relations|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-public_relations|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_security_studies_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - 
'**/details_harness|hendrycksTest-security_studies|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-security_studies|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_sociology_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-sociology|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-sociology|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_us_foreign_policy_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_virology_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-virology|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-virology|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_hendrycksTest_world_religions_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|hendrycksTest-world_religions|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|hendrycksTest-world_religions|5_2024-04-09T06-50-00.788799.parquet' - config_name: harness_truthfulqa_mc_0 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|truthfulqa:mc|0_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|truthfulqa:mc|0_2024-04-09T06-50-00.788799.parquet' - config_name: harness_winogrande_5 data_files: - split: 2024_04_09T06_50_00.788799 path: - '**/details_harness|winogrande|5_2024-04-09T06-50-00.788799.parquet' - split: latest path: - '**/details_harness|winogrande|5_2024-04-09T06-50-00.788799.parquet' - config_name: results data_files: - split: 
2024_04_09T06_50_00.788799 path: - results_2024-04-09T06-50-00.788799.parquet - split: latest path: - results_2024-04-09T06-50-00.788799.parquet --- # Dataset Card for Evaluation run of openbmb/Eurus-RM-7b <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [openbmb/Eurus-RM-7b](https://huggingface.co/openbmb/Eurus-RM-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_openbmb__Eurus-RM-7b", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2024-04-09T06:50:00.788799](https://huggingface.co/datasets/open-llm-leaderboard/details_openbmb__Eurus-RM-7b/blob/main/results_2024-04-09T06-50-00.788799.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks.
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.2539903952511765, "acc_stderr": 0.03098008602736365, "acc_norm": 0.25551266190257, "acc_norm_stderr": 0.03181160531793125, "mc1": 0.2350061199510404, "mc1_stderr": 0.014843061507731611, "mc2": NaN, "mc2_stderr": NaN }, "harness|arc:challenge|25": { "acc": 0.22098976109215018, "acc_stderr": 0.012124929206818258, "acc_norm": 0.27474402730375425, "acc_norm_stderr": 0.013044617212771227 }, "harness|hellaswag|10": { "acc": 0.27066321449910374, "acc_stderr": 0.004433943894764252, "acc_norm": 0.3196574387572197, "acc_norm_stderr": 0.004653907471785631 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.25, "acc_stderr": 0.04351941398892446, "acc_norm": 0.25, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.26666666666666666, "acc_stderr": 0.03820169914517904, "acc_norm": 0.26666666666666666, "acc_norm_stderr": 0.03820169914517904 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.2236842105263158, "acc_stderr": 0.03391160934343601, "acc_norm": 0.2236842105263158, "acc_norm_stderr": 0.03391160934343601 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.25, "acc_stderr": 0.04351941398892446, "acc_norm": 0.25, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.2792452830188679, "acc_stderr": 0.027611163402399715, "acc_norm": 0.2792452830188679, "acc_norm_stderr": 0.027611163402399715 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.2916666666666667, "acc_stderr": 0.03800968060554858, "acc_norm": 0.2916666666666667, "acc_norm_stderr": 0.03800968060554858 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.22, "acc_stderr": 0.0416333199893227, "acc_norm": 0.22, "acc_norm_stderr": 0.0416333199893227 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.31, "acc_stderr": 0.04648231987117316, "acc_norm": 0.31, "acc_norm_stderr": 0.04648231987117316 }, 
"harness|hendrycksTest-college_mathematics|5": { "acc": 0.33, "acc_stderr": 0.04725815626252605, "acc_norm": 0.33, "acc_norm_stderr": 0.04725815626252605 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.21965317919075145, "acc_stderr": 0.031568093627031744, "acc_norm": 0.21965317919075145, "acc_norm_stderr": 0.031568093627031744 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.28431372549019607, "acc_stderr": 0.04488482852329017, "acc_norm": 0.28431372549019607, "acc_norm_stderr": 0.04488482852329017 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.23, "acc_stderr": 0.042295258468165044, "acc_norm": 0.23, "acc_norm_stderr": 0.042295258468165044 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.28936170212765955, "acc_stderr": 0.029644006577009618, "acc_norm": 0.28936170212765955, "acc_norm_stderr": 0.029644006577009618 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.23684210526315788, "acc_stderr": 0.03999423879281338, "acc_norm": 0.23684210526315788, "acc_norm_stderr": 0.03999423879281338 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.296551724137931, "acc_stderr": 0.038061426873099935, "acc_norm": 0.296551724137931, "acc_norm_stderr": 0.038061426873099935 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.23809523809523808, "acc_stderr": 0.021935878081184763, "acc_norm": 0.23809523809523808, "acc_norm_stderr": 0.021935878081184763 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.25396825396825395, "acc_stderr": 0.03893259610604674, "acc_norm": 0.25396825396825395, "acc_norm_stderr": 0.03893259610604674 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.29, "acc_stderr": 0.04560480215720683, "acc_norm": 0.29, "acc_norm_stderr": 0.04560480215720683 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.22903225806451613, "acc_stderr": 0.02390491431178265, "acc_norm": 0.22903225806451613, "acc_norm_stderr": 0.02390491431178265 }, "harness|hendrycksTest-high_school_chemistry|5": { 
"acc": 0.22167487684729065, "acc_stderr": 0.029225575892489607, "acc_norm": 0.22167487684729065, "acc_norm_stderr": 0.029225575892489607 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.27, "acc_stderr": 0.044619604333847415, "acc_norm": 0.27, "acc_norm_stderr": 0.044619604333847415 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.24242424242424243, "acc_stderr": 0.03346409881055952, "acc_norm": 0.24242424242424243, "acc_norm_stderr": 0.03346409881055952 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.22727272727272727, "acc_stderr": 0.0298575156733864, "acc_norm": 0.22727272727272727, "acc_norm_stderr": 0.0298575156733864 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.2694300518134715, "acc_stderr": 0.03201867122877794, "acc_norm": 0.2694300518134715, "acc_norm_stderr": 0.03201867122877794 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.2512820512820513, "acc_stderr": 0.021992016662370547, "acc_norm": 0.2512820512820513, "acc_norm_stderr": 0.021992016662370547 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.25925925925925924, "acc_stderr": 0.026719240783712166, "acc_norm": 0.25925925925925924, "acc_norm_stderr": 0.026719240783712166 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.22268907563025211, "acc_stderr": 0.027025433498882378, "acc_norm": 0.22268907563025211, "acc_norm_stderr": 0.027025433498882378 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.2847682119205298, "acc_stderr": 0.03684881521389024, "acc_norm": 0.2847682119205298, "acc_norm_stderr": 0.03684881521389024 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.24036697247706423, "acc_stderr": 0.01832060732096407, "acc_norm": 0.24036697247706423, "acc_norm_stderr": 0.01832060732096407 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.3333333333333333, "acc_stderr": 0.03214952147802748, "acc_norm": 0.3333333333333333, 
"acc_norm_stderr": 0.03214952147802748 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.17647058823529413, "acc_stderr": 0.02675640153807896, "acc_norm": 0.17647058823529413, "acc_norm_stderr": 0.02675640153807896 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.20253164556962025, "acc_stderr": 0.026160568246601453, "acc_norm": 0.20253164556962025, "acc_norm_stderr": 0.026160568246601453 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.24663677130044842, "acc_stderr": 0.028930413120910856, "acc_norm": 0.24663677130044842, "acc_norm_stderr": 0.028930413120910856 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.25190839694656486, "acc_stderr": 0.03807387116306085, "acc_norm": 0.25190839694656486, "acc_norm_stderr": 0.03807387116306085 }, "harness|hendrycksTest-international_law|5": { "acc": 0.3140495867768595, "acc_stderr": 0.042369647530410184, "acc_norm": 0.3140495867768595, "acc_norm_stderr": 0.042369647530410184 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.21296296296296297, "acc_stderr": 0.039578354719809805, "acc_norm": 0.21296296296296297, "acc_norm_stderr": 0.039578354719809805 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.24539877300613497, "acc_stderr": 0.03380939813943354, "acc_norm": 0.24539877300613497, "acc_norm_stderr": 0.03380939813943354 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.26785714285714285, "acc_stderr": 0.04203277291467762, "acc_norm": 0.26785714285714285, "acc_norm_stderr": 0.04203277291467762 }, "harness|hendrycksTest-management|5": { "acc": 0.2524271844660194, "acc_stderr": 0.04301250399690878, "acc_norm": 0.2524271844660194, "acc_norm_stderr": 0.04301250399690878 }, "harness|hendrycksTest-marketing|5": { "acc": 0.21794871794871795, "acc_stderr": 0.02704685763071664, "acc_norm": 0.21794871794871795, "acc_norm_stderr": 0.02704685763071664 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.27, "acc_stderr": 0.044619604333847394, "acc_norm": 0.27, 
"acc_norm_stderr": 0.044619604333847394 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.28607918263090676, "acc_stderr": 0.01616087140512753, "acc_norm": 0.28607918263090676, "acc_norm_stderr": 0.01616087140512753 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.22832369942196531, "acc_stderr": 0.022598703804321624, "acc_norm": 0.22832369942196531, "acc_norm_stderr": 0.022598703804321624 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.2201117318435754, "acc_stderr": 0.013856994024227173, "acc_norm": 0.2201117318435754, "acc_norm_stderr": 0.013856994024227173 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.24836601307189543, "acc_stderr": 0.024739981355113596, "acc_norm": 0.24836601307189543, "acc_norm_stderr": 0.024739981355113596 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.2572347266881029, "acc_stderr": 0.024826171289250888, "acc_norm": 0.2572347266881029, "acc_norm_stderr": 0.024826171289250888 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.23148148148148148, "acc_stderr": 0.02346842983245114, "acc_norm": 0.23148148148148148, "acc_norm_stderr": 0.02346842983245114 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.2624113475177305, "acc_stderr": 0.026244920349843014, "acc_norm": 0.2624113475177305, "acc_norm_stderr": 0.026244920349843014 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.23402868318122555, "acc_stderr": 0.010813585552659684, "acc_norm": 0.23402868318122555, "acc_norm_stderr": 0.010813585552659684 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.2426470588235294, "acc_stderr": 0.026040662474201278, "acc_norm": 0.2426470588235294, "acc_norm_stderr": 0.026040662474201278 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.2826797385620915, "acc_stderr": 0.01821726955205344, "acc_norm": 0.2826797385620915, "acc_norm_stderr": 0.01821726955205344 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.2909090909090909, "acc_stderr": 0.04350271442923243, 
"acc_norm": 0.2909090909090909, "acc_norm_stderr": 0.04350271442923243 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.2571428571428571, "acc_stderr": 0.027979823538744546, "acc_norm": 0.2571428571428571, "acc_norm_stderr": 0.027979823538744546 }, "harness|hendrycksTest-sociology|5": { "acc": 0.23880597014925373, "acc_stderr": 0.030147775935409217, "acc_norm": 0.23880597014925373, "acc_norm_stderr": 0.030147775935409217 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.22, "acc_stderr": 0.041633319989322695, "acc_norm": 0.22, "acc_norm_stderr": 0.041633319989322695 }, "harness|hendrycksTest-virology|5": { "acc": 0.2710843373493976, "acc_stderr": 0.03460579907553027, "acc_norm": 0.2710843373493976, "acc_norm_stderr": 0.03460579907553027 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.23976608187134502, "acc_stderr": 0.03274485211946956, "acc_norm": 0.23976608187134502, "acc_norm_stderr": 0.03274485211946956 }, "harness|truthfulqa:mc|0": { "mc1": 0.2350061199510404, "mc1_stderr": 0.014843061507731611, "mc2": NaN, "mc2_stderr": NaN }, "harness|winogrande|5": { "acc": 0.5209155485398579, "acc_stderr": 0.014040185494212945 }, "harness|gsm8k|5": { "acc": 0.0, "acc_stderr": 0.0 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. 
--> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
crazysteeaam/Party_Affairs_Response
--- task_categories: - text-generation language: - zh tags: - legal size_categories: - 100K<n<1M --- Data from [https://wenda.12371.cn/liebiao.php](https://wenda.12371.cn/liebiao.php)
bofenghuang/asr-dummy
--- dataset_info: config_name: fr features: - name: audio dtype: audio: sampling_rate: 16000 - name: text dtype: string - name: duration dtype: float64 - name: split dtype: string splits: - name: test num_bytes: 35664300.82401466 num_examples: 120 download_size: 0 dataset_size: 35664300.82401466 configs: - config_name: fr data_files: - split: test path: fr/test-* --- # Dataset Card for "asr-dummy" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
zhengyun21/PMC-Patients-MetaData
--- license: cc-by-nc-sa-4.0 language: - en tags: - medical size_categories: - 100K<n<1M --- Metadata for PMC-Patients that might facilitate reproduction or usage of our dataset, consisting of the following files (most of which can be derived from our main files above). ## PMIDs.json PMIDs of articles from which PMC-Patients are extracted. List of string, length 140,897. ## train_PMIDs.json & dev_PMIDs.json & test_PMIDs.json PMIDs of articles in training / dev / test split. List of string. ## train_patient_uids.json & dev_patient_uids.json & test_patient_uids.json Patient_uids of notes in training / dev / test split. List of string. ## patient2article_relevance.json Full patient-to-article dataset. A dict where the keys are `patient_uid` of queries and each entry is a list of `PMID`, representing articles relevant to the query. The 3-point relevance can be obtained by checking whether the `PMID` is in `PMIDs.json`. ## patient2patient_similarity.json Full patient-to-patient similarity dataset. A dict where the keys are `patient_uid` of queries and each entry is a list of `patient_uid`, representing similar patients to the query. The 3-point similarity can be obtained by checking whether the similar patient shares the `PMID` (the string before '-' in `patient_uid`) with the query patient. ## PMID2Mesh.json Dict of PMIDs to MeSH terms of the article. ## MeSH_Humans_patient_uids.json `patient_uid` of the patients in PMC-Patients-Humans (extracted from articles with "Humans" MeSH term). List of string. ## PMC-Patients_citations.json Citations for all articles we used to collect our dataset. A dict where the keys are `patient_uid` and each entry is the citation of the source article. ## human_PMIDs.json PMIDs of the 500 randomly sampled articles for human evaluation. List of string.
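As a sketch of how the 3-point relevance / similarity labels described above might be derived once the JSON files are loaded into plain Python objects (the helper names and 0/1/2 grade values here are illustrative assumptions, not part of the released files):

```python
# Sketch: derive graded labels for the retrieval tasks.
# `patient_pmids` stands in for the contents of PMIDs.json (PMIDs of
# articles that contributed patient notes); the pair sets mirror entries of
# patient2article_relevance.json / patient2patient_similarity.json.

def par_relevance(pmid, relevant_pmids, patient_pmids):
    """Patient-to-article: 2 if the relevant article itself contributed a
    patient note (its PMID is in PMIDs.json), 1 if merely relevant, else 0."""
    if pmid not in relevant_pmids:
        return 0
    return 2 if pmid in patient_pmids else 1

def ppr_similarity(query_uid, candidate_uid, similar_uids):
    """Patient-to-patient: 2 if the similar patient shares the source
    article (PMID prefix of patient_uid, before '-'), 1 if similar, else 0."""
    if candidate_uid not in similar_uids:
        return 0
    same_article = query_uid.split("-")[0] == candidate_uid.split("-")[0]
    return 2 if same_article else 1

# Toy example with made-up identifiers:
patient_pmids = {"100", "200"}
grade = par_relevance("100", {"100", "300"}, patient_pmids)  # -> 2
```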
## PMC-Patients_human_eval.json Expert annotation results of the 500 articles in `human_PMIDs.json`, including manually annotated patient note, demographics, and relations of the top 5 retrieved articles / patients. List of dict, and the keys are almost identical to `PMC-Patients.json`, with the exception of `human_patient_id` and `human_patient_uid`. The relational annotations are different from automatic ones. They are strings indicating on which dimension(s) are the patient-article / patient-patient pair relevant / similar. "0", "1", "2", and "3" represent "Irrelevant", "Diagnosis", "Test", "Treatment" in ReCDS-PAR, and represent "Dissimilar", "Features", "Outcomes", "Exposure" in ReCDS-PPR. Note that a pair can be relevant / similar on multiple dimensions at the same time. ## PAR_PMIDs.json PMIDs of the 11.7M articles used as PAR corpus. List of string.
fathan/ije_offensive_lid
--- license: afl-3.0 --- The following is a code-mixed Indonesian-Javanese-English Twitter dataset for offensive language identification.
NobodyExistsOnTheInternet/GiftedConvoFixedMath
--- license: mit ---
sayakpaul/hf-codegen-v2
--- dataset_info: features: - name: index dtype: int64 - name: repo_id dtype: string - name: file_path dtype: string - name: content dtype: string - name: __index_level_0__ dtype: int64 splits: - name: train num_bytes: 51358178715 num_examples: 370000 download_size: 11423577734 dataset_size: 51358178715 --- # Dataset Card for "hf-codegen-v2" Dataset generated with the code from: https://github.com/sayakpaul/hf-codegen.
xxman123/movie_poster
--- license: apache-2.0 ---
Mediocreatmybest/example_quotes
--- license: cc language: - en pretty_name: Example dataset of quotes --- # Dataset: Example Quotes Starting example structure for how to store quotes. ## Structure - **Quote**: - *Type*: String - *Description*: The primary content. - **Antagonist**: - *Type*: String - *Description*: The individual responsible for said quote. - **Antagonists_id**: - *Type*: Integer - *Description*: A unique identifier associated with the quoting antagonist. Used for generating URLs or internal referencing. - **URL**: - *Type*: List of Strings (separated by `|`) - *Description*: A list of URLs offering context or sources for the quote. A single quote may have multiple URLs or a single one. - **Source_type**: - *Type*: List of Strings (separated by `|`) - *Description*: The medium the quote was sourced from: "online news", "online", etc. - **Year**: - *Type*: Integer - *Description*: simple integer representing the year the quote was made. - **Tags**: - *Type*: List of Strings (separated by `|`) - *Description*: Miscellaneous tags linked to the quote, for categorization or filtering.
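A minimal sketch of reading the structure above, splitting the `|`-separated list fields into Python lists (the CSV layout and sample row are illustrative assumptions about how rows are stored):

```python
import csv
import io

# Columns whose values are pipe-separated lists, per the schema above.
LIST_FIELDS = ("URL", "Source_type", "Tags")

def parse_row(row):
    """Convert a raw CSV row dict into typed fields."""
    parsed = dict(row)
    for field in LIST_FIELDS:
        parsed[field] = [p.strip() for p in row[field].split("|") if p.strip()]
    parsed["Antagonists_id"] = int(parsed["Antagonists_id"])
    parsed["Year"] = int(parsed["Year"])
    return parsed

# Illustrative in-memory sample standing in for the real file:
sample = io.StringIO(
    "Quote,Antagonist,Antagonists_id,URL,Source_type,Year,Tags\n"
    '"To be or not to be",Hamlet,1,'
    "https://a.example|https://b.example,online news|online,1603,drama|classic\n"
)
rows = [parse_row(r) for r in csv.DictReader(sample)]
```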
freshpearYoon/v3_train_free_concat_4
--- dataset_info: features: - name: input_features sequence: sequence: float32 - name: labels sequence: int64 splits: - name: train num_bytes: 3842752720 num_examples: 2500 download_size: 1916461019 dataset_size: 3842752720 configs: - config_name: default data_files: - split: train path: data/train-* ---
bh8648/esg3
--- dataset_info: features: - name: Major Category dtype: string - name: Middle Categoty dtype: string - name: Small Category dtype: string - name: output dtype: string splits: - name: train num_bytes: 222466 num_examples: 56 download_size: 107170 dataset_size: 222466 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "esg3" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
BangumiBase/natsumesbookoffriends
--- license: mit tags: - art size_categories: - 1K<n<10K --- # Bangumi Image Base of Natsume's Book Of Friends This is the image base of bangumi Natsume's Book of Friends; we detected 60 characters, 6311 images in total. The full dataset is [here](all.zip). **Please note that these image bases are not guaranteed to be 100% cleaned; they may be noisy.** If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability). Here is the characters' preview: | # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 | |:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------| | 0 | 2720 | [Download](0/dataset.zip) | ![preview 1](0/preview_1.png) | ![preview 2](0/preview_2.png) | ![preview 3](0/preview_3.png) | ![preview 4](0/preview_4.png) | ![preview 5](0/preview_5.png) | ![preview 6](0/preview_6.png) | ![preview 7](0/preview_7.png) | ![preview 8](0/preview_8.png) | | 1 | 274 | [Download](1/dataset.zip) | ![preview 1](1/preview_1.png) | ![preview 2](1/preview_2.png) | ![preview 3](1/preview_3.png) | ![preview 4](1/preview_4.png) | ![preview 5](1/preview_5.png) | ![preview 6](1/preview_6.png) | ![preview 7](1/preview_7.png) | ![preview 8](1/preview_8.png) | | 2 | 199 | [Download](2/dataset.zip) | ![preview 1](2/preview_1.png) | ![preview 2](2/preview_2.png) | ![preview 3](2/preview_3.png) | ![preview 4](2/preview_4.png) | ![preview 5](2/preview_5.png) | ![preview 6](2/preview_6.png) | ![preview 7](2/preview_7.png) | ![preview 8](2/preview_8.png) | | 3 | 233 | [Download](3/dataset.zip) | ![preview 1](3/preview_1.png) |
![preview 2](3/preview_2.png) | ![preview 3](3/preview_3.png) | ![preview 4](3/preview_4.png) | ![preview 5](3/preview_5.png) | ![preview 6](3/preview_6.png) | ![preview 7](3/preview_7.png) | ![preview 8](3/preview_8.png) | | 4 | 102 | [Download](4/dataset.zip) | ![preview 1](4/preview_1.png) | ![preview 2](4/preview_2.png) | ![preview 3](4/preview_3.png) | ![preview 4](4/preview_4.png) | ![preview 5](4/preview_5.png) | ![preview 6](4/preview_6.png) | ![preview 7](4/preview_7.png) | ![preview 8](4/preview_8.png) | | 5 | 52 | [Download](5/dataset.zip) | ![preview 1](5/preview_1.png) | ![preview 2](5/preview_2.png) | ![preview 3](5/preview_3.png) | ![preview 4](5/preview_4.png) | ![preview 5](5/preview_5.png) | ![preview 6](5/preview_6.png) | ![preview 7](5/preview_7.png) | ![preview 8](5/preview_8.png) | | 6 | 89 | [Download](6/dataset.zip) | ![preview 1](6/preview_1.png) | ![preview 2](6/preview_2.png) | ![preview 3](6/preview_3.png) | ![preview 4](6/preview_4.png) | ![preview 5](6/preview_5.png) | ![preview 6](6/preview_6.png) | ![preview 7](6/preview_7.png) | ![preview 8](6/preview_8.png) | | 7 | 110 | [Download](7/dataset.zip) | ![preview 1](7/preview_1.png) | ![preview 2](7/preview_2.png) | ![preview 3](7/preview_3.png) | ![preview 4](7/preview_4.png) | ![preview 5](7/preview_5.png) | ![preview 6](7/preview_6.png) | ![preview 7](7/preview_7.png) | ![preview 8](7/preview_8.png) | | 8 | 373 | [Download](8/dataset.zip) | ![preview 1](8/preview_1.png) | ![preview 2](8/preview_2.png) | ![preview 3](8/preview_3.png) | ![preview 4](8/preview_4.png) | ![preview 5](8/preview_5.png) | ![preview 6](8/preview_6.png) | ![preview 7](8/preview_7.png) | ![preview 8](8/preview_8.png) | | 9 | 74 | [Download](9/dataset.zip) | ![preview 1](9/preview_1.png) | ![preview 2](9/preview_2.png) | ![preview 3](9/preview_3.png) | ![preview 4](9/preview_4.png) | ![preview 5](9/preview_5.png) | ![preview 6](9/preview_6.png) | ![preview 7](9/preview_7.png) | ![preview 8](9/preview_8.png) | | 
10 | 58 | [Download](10/dataset.zip) | ![preview 1](10/preview_1.png) | ![preview 2](10/preview_2.png) | ![preview 3](10/preview_3.png) | ![preview 4](10/preview_4.png) | ![preview 5](10/preview_5.png) | ![preview 6](10/preview_6.png) | ![preview 7](10/preview_7.png) | ![preview 8](10/preview_8.png) | | 11 | 48 | [Download](11/dataset.zip) | ![preview 1](11/preview_1.png) | ![preview 2](11/preview_2.png) | ![preview 3](11/preview_3.png) | ![preview 4](11/preview_4.png) | ![preview 5](11/preview_5.png) | ![preview 6](11/preview_6.png) | ![preview 7](11/preview_7.png) | ![preview 8](11/preview_8.png) | | 12 | 150 | [Download](12/dataset.zip) | ![preview 1](12/preview_1.png) | ![preview 2](12/preview_2.png) | ![preview 3](12/preview_3.png) | ![preview 4](12/preview_4.png) | ![preview 5](12/preview_5.png) | ![preview 6](12/preview_6.png) | ![preview 7](12/preview_7.png) | ![preview 8](12/preview_8.png) | | 13 | 39 | [Download](13/dataset.zip) | ![preview 1](13/preview_1.png) | ![preview 2](13/preview_2.png) | ![preview 3](13/preview_3.png) | ![preview 4](13/preview_4.png) | ![preview 5](13/preview_5.png) | ![preview 6](13/preview_6.png) | ![preview 7](13/preview_7.png) | ![preview 8](13/preview_8.png) | | 14 | 31 | [Download](14/dataset.zip) | ![preview 1](14/preview_1.png) | ![preview 2](14/preview_2.png) | ![preview 3](14/preview_3.png) | ![preview 4](14/preview_4.png) | ![preview 5](14/preview_5.png) | ![preview 6](14/preview_6.png) | ![preview 7](14/preview_7.png) | ![preview 8](14/preview_8.png) | | 15 | 89 | [Download](15/dataset.zip) | ![preview 1](15/preview_1.png) | ![preview 2](15/preview_2.png) | ![preview 3](15/preview_3.png) | ![preview 4](15/preview_4.png) | ![preview 5](15/preview_5.png) | ![preview 6](15/preview_6.png) | ![preview 7](15/preview_7.png) | ![preview 8](15/preview_8.png) | | 16 | 37 | [Download](16/dataset.zip) | ![preview 1](16/preview_1.png) | ![preview 2](16/preview_2.png) | ![preview 3](16/preview_3.png) | ![preview 4](16/preview_4.png) 
| ![preview 5](16/preview_5.png) | ![preview 6](16/preview_6.png) | ![preview 7](16/preview_7.png) | ![preview 8](16/preview_8.png) | | 17 | 82 | [Download](17/dataset.zip) | ![preview 1](17/preview_1.png) | ![preview 2](17/preview_2.png) | ![preview 3](17/preview_3.png) | ![preview 4](17/preview_4.png) | ![preview 5](17/preview_5.png) | ![preview 6](17/preview_6.png) | ![preview 7](17/preview_7.png) | ![preview 8](17/preview_8.png) | | 18 | 87 | [Download](18/dataset.zip) | ![preview 1](18/preview_1.png) | ![preview 2](18/preview_2.png) | ![preview 3](18/preview_3.png) | ![preview 4](18/preview_4.png) | ![preview 5](18/preview_5.png) | ![preview 6](18/preview_6.png) | ![preview 7](18/preview_7.png) | ![preview 8](18/preview_8.png) | | 19 | 163 | [Download](19/dataset.zip) | ![preview 1](19/preview_1.png) | ![preview 2](19/preview_2.png) | ![preview 3](19/preview_3.png) | ![preview 4](19/preview_4.png) | ![preview 5](19/preview_5.png) | ![preview 6](19/preview_6.png) | ![preview 7](19/preview_7.png) | ![preview 8](19/preview_8.png) | | 20 | 123 | [Download](20/dataset.zip) | ![preview 1](20/preview_1.png) | ![preview 2](20/preview_2.png) | ![preview 3](20/preview_3.png) | ![preview 4](20/preview_4.png) | ![preview 5](20/preview_5.png) | ![preview 6](20/preview_6.png) | ![preview 7](20/preview_7.png) | ![preview 8](20/preview_8.png) | | 21 | 43 | [Download](21/dataset.zip) | ![preview 1](21/preview_1.png) | ![preview 2](21/preview_2.png) | ![preview 3](21/preview_3.png) | ![preview 4](21/preview_4.png) | ![preview 5](21/preview_5.png) | ![preview 6](21/preview_6.png) | ![preview 7](21/preview_7.png) | ![preview 8](21/preview_8.png) | | 22 | 84 | [Download](22/dataset.zip) | ![preview 1](22/preview_1.png) | ![preview 2](22/preview_2.png) | ![preview 3](22/preview_3.png) | ![preview 4](22/preview_4.png) | ![preview 5](22/preview_5.png) | ![preview 6](22/preview_6.png) | ![preview 7](22/preview_7.png) | ![preview 8](22/preview_8.png) | | 23 | 33 | 
[Download](23/dataset.zip) | ![preview 1](23/preview_1.png) | ![preview 2](23/preview_2.png) | ![preview 3](23/preview_3.png) | ![preview 4](23/preview_4.png) | ![preview 5](23/preview_5.png) | ![preview 6](23/preview_6.png) | ![preview 7](23/preview_7.png) | ![preview 8](23/preview_8.png) | | 24 | 16 | [Download](24/dataset.zip) | ![preview 1](24/preview_1.png) | ![preview 2](24/preview_2.png) | ![preview 3](24/preview_3.png) | ![preview 4](24/preview_4.png) | ![preview 5](24/preview_5.png) | ![preview 6](24/preview_6.png) | ![preview 7](24/preview_7.png) | ![preview 8](24/preview_8.png) | | 25 | 18 | [Download](25/dataset.zip) | ![preview 1](25/preview_1.png) | ![preview 2](25/preview_2.png) | ![preview 3](25/preview_3.png) | ![preview 4](25/preview_4.png) | ![preview 5](25/preview_5.png) | ![preview 6](25/preview_6.png) | ![preview 7](25/preview_7.png) | ![preview 8](25/preview_8.png) | | 26 | 33 | [Download](26/dataset.zip) | ![preview 1](26/preview_1.png) | ![preview 2](26/preview_2.png) | ![preview 3](26/preview_3.png) | ![preview 4](26/preview_4.png) | ![preview 5](26/preview_5.png) | ![preview 6](26/preview_6.png) | ![preview 7](26/preview_7.png) | ![preview 8](26/preview_8.png) | | 27 | 23 | [Download](27/dataset.zip) | ![preview 1](27/preview_1.png) | ![preview 2](27/preview_2.png) | ![preview 3](27/preview_3.png) | ![preview 4](27/preview_4.png) | ![preview 5](27/preview_5.png) | ![preview 6](27/preview_6.png) | ![preview 7](27/preview_7.png) | ![preview 8](27/preview_8.png) | | 28 | 20 | [Download](28/dataset.zip) | ![preview 1](28/preview_1.png) | ![preview 2](28/preview_2.png) | ![preview 3](28/preview_3.png) | ![preview 4](28/preview_4.png) | ![preview 5](28/preview_5.png) | ![preview 6](28/preview_6.png) | ![preview 7](28/preview_7.png) | ![preview 8](28/preview_8.png) | | 29 | 21 | [Download](29/dataset.zip) | ![preview 1](29/preview_1.png) | ![preview 2](29/preview_2.png) | ![preview 3](29/preview_3.png) | ![preview 4](29/preview_4.png) | 
![preview 5](29/preview_5.png) | ![preview 6](29/preview_6.png) | ![preview 7](29/preview_7.png) | ![preview 8](29/preview_8.png) | | 30 | 34 | [Download](30/dataset.zip) | ![preview 1](30/preview_1.png) | ![preview 2](30/preview_2.png) | ![preview 3](30/preview_3.png) | ![preview 4](30/preview_4.png) | ![preview 5](30/preview_5.png) | ![preview 6](30/preview_6.png) | ![preview 7](30/preview_7.png) | ![preview 8](30/preview_8.png) | | 31 | 26 | [Download](31/dataset.zip) | ![preview 1](31/preview_1.png) | ![preview 2](31/preview_2.png) | ![preview 3](31/preview_3.png) | ![preview 4](31/preview_4.png) | ![preview 5](31/preview_5.png) | ![preview 6](31/preview_6.png) | ![preview 7](31/preview_7.png) | ![preview 8](31/preview_8.png) | | 32 | 20 | [Download](32/dataset.zip) | ![preview 1](32/preview_1.png) | ![preview 2](32/preview_2.png) | ![preview 3](32/preview_3.png) | ![preview 4](32/preview_4.png) | ![preview 5](32/preview_5.png) | ![preview 6](32/preview_6.png) | ![preview 7](32/preview_7.png) | ![preview 8](32/preview_8.png) | | 33 | 22 | [Download](33/dataset.zip) | ![preview 1](33/preview_1.png) | ![preview 2](33/preview_2.png) | ![preview 3](33/preview_3.png) | ![preview 4](33/preview_4.png) | ![preview 5](33/preview_5.png) | ![preview 6](33/preview_6.png) | ![preview 7](33/preview_7.png) | ![preview 8](33/preview_8.png) | | 34 | 20 | [Download](34/dataset.zip) | ![preview 1](34/preview_1.png) | ![preview 2](34/preview_2.png) | ![preview 3](34/preview_3.png) | ![preview 4](34/preview_4.png) | ![preview 5](34/preview_5.png) | ![preview 6](34/preview_6.png) | ![preview 7](34/preview_7.png) | ![preview 8](34/preview_8.png) | | 35 | 10 | [Download](35/dataset.zip) | ![preview 1](35/preview_1.png) | ![preview 2](35/preview_2.png) | ![preview 3](35/preview_3.png) | ![preview 4](35/preview_4.png) | ![preview 5](35/preview_5.png) | ![preview 6](35/preview_6.png) | ![preview 7](35/preview_7.png) | ![preview 8](35/preview_8.png) | | 36 | 27 | 
[Download](36/dataset.zip) | ![preview 1](36/preview_1.png) | ![preview 2](36/preview_2.png) | ![preview 3](36/preview_3.png) | ![preview 4](36/preview_4.png) | ![preview 5](36/preview_5.png) | ![preview 6](36/preview_6.png) | ![preview 7](36/preview_7.png) | ![preview 8](36/preview_8.png) | | 37 | 9 | [Download](37/dataset.zip) | ![preview 1](37/preview_1.png) | ![preview 2](37/preview_2.png) | ![preview 3](37/preview_3.png) | ![preview 4](37/preview_4.png) | ![preview 5](37/preview_5.png) | ![preview 6](37/preview_6.png) | ![preview 7](37/preview_7.png) | ![preview 8](37/preview_8.png) | | 38 | 16 | [Download](38/dataset.zip) | ![preview 1](38/preview_1.png) | ![preview 2](38/preview_2.png) | ![preview 3](38/preview_3.png) | ![preview 4](38/preview_4.png) | ![preview 5](38/preview_5.png) | ![preview 6](38/preview_6.png) | ![preview 7](38/preview_7.png) | ![preview 8](38/preview_8.png) | | 39 | 104 | [Download](39/dataset.zip) | ![preview 1](39/preview_1.png) | ![preview 2](39/preview_2.png) | ![preview 3](39/preview_3.png) | ![preview 4](39/preview_4.png) | ![preview 5](39/preview_5.png) | ![preview 6](39/preview_6.png) | ![preview 7](39/preview_7.png) | ![preview 8](39/preview_8.png) | | 40 | 22 | [Download](40/dataset.zip) | ![preview 1](40/preview_1.png) | ![preview 2](40/preview_2.png) | ![preview 3](40/preview_3.png) | ![preview 4](40/preview_4.png) | ![preview 5](40/preview_5.png) | ![preview 6](40/preview_6.png) | ![preview 7](40/preview_7.png) | ![preview 8](40/preview_8.png) | | 41 | 61 | [Download](41/dataset.zip) | ![preview 1](41/preview_1.png) | ![preview 2](41/preview_2.png) | ![preview 3](41/preview_3.png) | ![preview 4](41/preview_4.png) | ![preview 5](41/preview_5.png) | ![preview 6](41/preview_6.png) | ![preview 7](41/preview_7.png) | ![preview 8](41/preview_8.png) | | 42 | 11 | [Download](42/dataset.zip) | ![preview 1](42/preview_1.png) | ![preview 2](42/preview_2.png) | ![preview 3](42/preview_3.png) | ![preview 4](42/preview_4.png) | 
![preview 5](42/preview_5.png) | ![preview 6](42/preview_6.png) | ![preview 7](42/preview_7.png) | ![preview 8](42/preview_8.png) | | 43 | 26 | [Download](43/dataset.zip) | ![preview 1](43/preview_1.png) | ![preview 2](43/preview_2.png) | ![preview 3](43/preview_3.png) | ![preview 4](43/preview_4.png) | ![preview 5](43/preview_5.png) | ![preview 6](43/preview_6.png) | ![preview 7](43/preview_7.png) | ![preview 8](43/preview_8.png) | | 44 | 42 | [Download](44/dataset.zip) | ![preview 1](44/preview_1.png) | ![preview 2](44/preview_2.png) | ![preview 3](44/preview_3.png) | ![preview 4](44/preview_4.png) | ![preview 5](44/preview_5.png) | ![preview 6](44/preview_6.png) | ![preview 7](44/preview_7.png) | ![preview 8](44/preview_8.png) | | 45 | 8 | [Download](45/dataset.zip) | ![preview 1](45/preview_1.png) | ![preview 2](45/preview_2.png) | ![preview 3](45/preview_3.png) | ![preview 4](45/preview_4.png) | ![preview 5](45/preview_5.png) | ![preview 6](45/preview_6.png) | ![preview 7](45/preview_7.png) | ![preview 8](45/preview_8.png) | | 46 | 9 | [Download](46/dataset.zip) | ![preview 1](46/preview_1.png) | ![preview 2](46/preview_2.png) | ![preview 3](46/preview_3.png) | ![preview 4](46/preview_4.png) | ![preview 5](46/preview_5.png) | ![preview 6](46/preview_6.png) | ![preview 7](46/preview_7.png) | ![preview 8](46/preview_8.png) | | 47 | 21 | [Download](47/dataset.zip) | ![preview 1](47/preview_1.png) | ![preview 2](47/preview_2.png) | ![preview 3](47/preview_3.png) | ![preview 4](47/preview_4.png) | ![preview 5](47/preview_5.png) | ![preview 6](47/preview_6.png) | ![preview 7](47/preview_7.png) | ![preview 8](47/preview_8.png) | | 48 | 8 | [Download](48/dataset.zip) | ![preview 1](48/preview_1.png) | ![preview 2](48/preview_2.png) | ![preview 3](48/preview_3.png) | ![preview 4](48/preview_4.png) | ![preview 5](48/preview_5.png) | ![preview 6](48/preview_6.png) | ![preview 7](48/preview_7.png) | ![preview 8](48/preview_8.png) | | 49 | 17 | [Download](49/dataset.zip) | 
![preview 1](49/preview_1.png) | ![preview 2](49/preview_2.png) | ![preview 3](49/preview_3.png) | ![preview 4](49/preview_4.png) | ![preview 5](49/preview_5.png) | ![preview 6](49/preview_6.png) | ![preview 7](49/preview_7.png) | ![preview 8](49/preview_8.png) | | 50 | 17 | [Download](50/dataset.zip) | ![preview 1](50/preview_1.png) | ![preview 2](50/preview_2.png) | ![preview 3](50/preview_3.png) | ![preview 4](50/preview_4.png) | ![preview 5](50/preview_5.png) | ![preview 6](50/preview_6.png) | ![preview 7](50/preview_7.png) | ![preview 8](50/preview_8.png) | | 51 | 10 | [Download](51/dataset.zip) | ![preview 1](51/preview_1.png) | ![preview 2](51/preview_2.png) | ![preview 3](51/preview_3.png) | ![preview 4](51/preview_4.png) | ![preview 5](51/preview_5.png) | ![preview 6](51/preview_6.png) | ![preview 7](51/preview_7.png) | ![preview 8](51/preview_8.png) | | 52 | 28 | [Download](52/dataset.zip) | ![preview 1](52/preview_1.png) | ![preview 2](52/preview_2.png) | ![preview 3](52/preview_3.png) | ![preview 4](52/preview_4.png) | ![preview 5](52/preview_5.png) | ![preview 6](52/preview_6.png) | ![preview 7](52/preview_7.png) | ![preview 8](52/preview_8.png) | | 53 | 15 | [Download](53/dataset.zip) | ![preview 1](53/preview_1.png) | ![preview 2](53/preview_2.png) | ![preview 3](53/preview_3.png) | ![preview 4](53/preview_4.png) | ![preview 5](53/preview_5.png) | ![preview 6](53/preview_6.png) | ![preview 7](53/preview_7.png) | ![preview 8](53/preview_8.png) | | 54 | 102 | [Download](54/dataset.zip) | ![preview 1](54/preview_1.png) | ![preview 2](54/preview_2.png) | ![preview 3](54/preview_3.png) | ![preview 4](54/preview_4.png) | ![preview 5](54/preview_5.png) | ![preview 6](54/preview_6.png) | ![preview 7](54/preview_7.png) | ![preview 8](54/preview_8.png) | | 55 | 19 | [Download](55/dataset.zip) | ![preview 1](55/preview_1.png) | ![preview 2](55/preview_2.png) | ![preview 3](55/preview_3.png) | ![preview 4](55/preview_4.png) | ![preview 5](55/preview_5.png) | 
![preview 6](55/preview_6.png) | ![preview 7](55/preview_7.png) | ![preview 8](55/preview_8.png) | | 56 | 15 | [Download](56/dataset.zip) | ![preview 1](56/preview_1.png) | ![preview 2](56/preview_2.png) | ![preview 3](56/preview_3.png) | ![preview 4](56/preview_4.png) | ![preview 5](56/preview_5.png) | ![preview 6](56/preview_6.png) | ![preview 7](56/preview_7.png) | ![preview 8](56/preview_8.png) | | 57 | 8 | [Download](57/dataset.zip) | ![preview 1](57/preview_1.png) | ![preview 2](57/preview_2.png) | ![preview 3](57/preview_3.png) | ![preview 4](57/preview_4.png) | ![preview 5](57/preview_5.png) | ![preview 6](57/preview_6.png) | ![preview 7](57/preview_7.png) | ![preview 8](57/preview_8.png) | | 58 | 9 | [Download](58/dataset.zip) | ![preview 1](58/preview_1.png) | ![preview 2](58/preview_2.png) | ![preview 3](58/preview_3.png) | ![preview 4](58/preview_4.png) | ![preview 5](58/preview_5.png) | ![preview 6](58/preview_6.png) | ![preview 7](58/preview_7.png) | ![preview 8](58/preview_8.png) | | noise | 151 | [Download](-1/dataset.zip) | ![preview 1](-1/preview_1.png) | ![preview 2](-1/preview_2.png) | ![preview 3](-1/preview_3.png) | ![preview 4](-1/preview_4.png) | ![preview 5](-1/preview_5.png) | ![preview 6](-1/preview_6.png) | ![preview 7](-1/preview_7.png) | ![preview 8](-1/preview_8.png) |
kevincluo/structure_wildfire_damage_classification
--- license: cc-by-4.0 configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: image dtype: image - name: label dtype: class_label: names: '0': affected '1': destroyed '2': inaccessible '3': major '4': minor '5': no_damage splits: - name: train num_bytes: 125229532 num_examples: 355 download_size: 125234000 dataset_size: 125229532 language: - en tags: - climate - wildfire - image classification - damage assessment --- # Dataset Card for Structures Damaged by Wildfire **Homepage:** [Image Dataset of Structures Damaged by Wildfire in California 2020-2022](https://zenodo.org/record/8336570) ### Dataset Summary The dataset contains over 18,000 images of homes damaged by wildfire between 2020 and 2022 in California, USA, captured by the California Department of Forestry and Fire Protection (Cal Fire) during the damage assessment process. The dataset spans more than 18 wildfire events, including the 2020 August Complex Fire, the first recorded "gigafire" event in California, where the area burned exceeded 1 million acres. Each image, corresponding to a built structure, is classified by government damage assessors into 6 different categories: Inaccessible (image taken but no assessment made), No Damage, Affected (1-9%), Minor (10-25%), Major (26-50%), and Destroyed (>50%). While over 57,000 structures were evaluated during the damage assessment process, only about 18,000 contain images; additional data about the structures, such as the street address or structure materials, for both those with and without corresponding images can be accessed in the "Additional Attribute Data" file.
The 18 wildfire events captured in the dataset are:

- [AUG] August Complex (2020)
- [BEA] Bear Fire (2020)
- [BEU] BEU Lightning Complex Fire (2020)
- [CAL] Caldor Fire (2021)
- [CAS] Castle Fire (2020)
- [CRE] Creek Fire (2020)
- [DIN] DINS Statewide (Collection of Smaller Fires, 2021)
- [DIX] Dixie Fire (2021)
- [FAI] Fairview Fire (2022)
- [FOR] Fork Fire (2022)
- [GLA] Glass Fire (2020)
- [MIL] Mill Mountain Fire (2022)
- [MON] Monument Fire (2021)
- [MOS] Mosquito Fire (2022)
- [POST] Post Fire (2020)
- [SCU] SCU Complex Fire (2020)
- [VAL] Valley Fire (2020)
- [ZOG] Zogg Fire (2020)

The author retrieved the data, originally published as GIS feature layers, from the publicly accessible CAL FIRE Hub, then processed it into image and tabular formats. The author collaborated with Cal Fire in working with the data and has received explicit permission for republication.

### Data Fields

The data instances have the following fields:

- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`) the image file is automatically decoded. Decoding a large number of image files might take a significant amount of time, so it is important to query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `label`: an `int` classification label.

Class Label Mappings:

```
{
  "affected": 0,
  "destroyed": 1,
  "inaccessible": 2,
  "major": 3,
  "minor": 4,
  "no_damage": 5,
}
```

### Data Splits

|               |  train |
|---------------|-------:|
| # of examples | 18,714 |
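As a quick sanity check on the label scheme, the integer labels can be decoded with a plain mapping that mirrors the class-label names above (a minimal sketch; for the full dataset one would call `load_dataset("kevincluo/structure_wildfire_damage_classification")`):

```python
# Decode this card's integer damage labels into category names.
# The mapping mirrors the class-label names listed above.
ID2LABEL = {
    0: "affected",
    1: "destroyed",
    2: "inaccessible",
    3: "major",
    4: "minor",
    5: "no_damage",
}
LABEL2ID = {name: i for i, name in ID2LABEL.items()}

def decode_label(label_id: int) -> str:
    """Return the damage category name for an integer label."""
    return ID2LABEL[label_id]

print(decode_label(5))        # no_damage
print(LABEL2ID["destroyed"])  # 1
```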
trawzified/khajiitspeak
---
license: gpl-3.0
task_categories:
- text2text-generation
language:
- en
pretty_name: khajiit_translations
size_categories:
- 10K<n<100K
---

# Khajiit Translations

Extracted from the BA Khajiit Speak Redux mod (including patches) and the original Khajiit Speak mod. BA_KSR overrides the original mod to provide improved translations.
mcipriano/stackoverflow-kubernetes-questions
---
license: cc-by-sa-4.0
task_categories:
- question-answering
- text-generation
language:
- en
tags:
- Kubernetes
- Stackoverflow
size_categories:
- 10K<n<100K
---

The purpose of this dataset is to support training, fine-tuning, etc. of any Language Model. In the 'data' folder, you will find the dataset in Parquet format, a common format for these workflows. In case it may be useful for other purposes, I have also included the dataset in CSV format.

All data in this dataset were retrieved from the Stack Exchange network using the Stack Exchange Data Explorer tool (https://github.com/StackExchange/StackExchange.DataExplorer). The dataset contains all the Question-Answer pairs from Stack Overflow with Kubernetes tags; in each pair, the Answer is the one with a positive and maximum score. Posts on Stack Overflow with negative scores have been excluded from the dataset.
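The selection rule described above ("the Answer is the one with a positive and maximum score") can be sketched with pandas on a toy table. The column names here are illustrative, not the dataset's actual schema, and "positive" is read as score > 0:

```python
import pandas as pd

# Toy answers table; real column names in the dump may differ.
answers = pd.DataFrame({
    "question_id": [1, 1, 1, 2, 2],
    "answer":      ["a1", "a2", "a3", "b1", "b2"],
    "score":       [3, 7, -2, -1, 0],
})

# Keep only positively scored answers, then the top-scoring one per question.
positive = answers[answers["score"] > 0]
best = positive.loc[positive.groupby("question_id")["score"].idxmax()]

print(best["answer"].tolist())  # ['a2'] -- question 2 has no positively scored answer
```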
aimona/stripchat-all-data
--- dataset_info: features: - name: output dtype: string - name: input dtype: string - name: instructions dtype: string splits: - name: train num_bytes: 647195713 num_examples: 30052 download_size: 247595296 dataset_size: 647195713 --- # Dataset Card for "stripchat-all-data" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
datasets-maintainers/audiofolder_two_configs_in_metadata
--- configs: - config_name: v1 data_dir: v1 drop_labels: true - config_name: v2 data_dir: v2 drop_labels: false duplicated_from: polinaeterna/audiofolder_two_configs_in_metadata ---
Jithendra-k/Flan-T5_interACT
---
dataset_info:
  features:
  - name: Prompt
    dtype: string
  - name: Response
    dtype: string
  - name: id
    dtype: string
  splits:
  - name: train
    num_bytes: 38228.89968321014
    num_examples: 757
  - name: test
    num_bytes: 9595.100316789863
    num_examples: 190
  download_size: 32480
  dataset_size: 47824.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---

This dataset is part of Project InterACT (a multi-model AI system) involving an object detection model and an LLM. It is a custom-built dataset created solely for fine-tuning Google's Flan-T5 model.

The dataset contains 2 attributes: Prompt and Response.

- Prompt: a query or a statement given by a user
- Response: a list of keywords extracted from the user query or statement

This dataset was transformed from a CSV file using this Google Colab notebook: https://colab.research.google.com/drive/1Uc7gnn0QaVaYW2ohFBNuEHgdhCQPclbf?usp=sharing
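A minimal sketch of how a row with this Prompt/Response schema might be turned into a text-to-text training pair for Flan-T5. The `"extract keywords:"` instruction prefix is a hypothetical choice for illustration, not something stored in the dataset:

```python
# Turn a Prompt/Response row into a seq2seq (source, target) pair.
# The "extract keywords:" prefix is a hypothetical task instruction.
def to_seq2seq_pair(row: dict) -> tuple:
    source = f"extract keywords: {row['Prompt']}"
    target = row["Response"]
    return source, target

example = {"Prompt": "Find the red cup on the table", "Response": "red cup, table"}
src, tgt = to_seq2seq_pair(example)
print(src)  # extract keywords: Find the red cup on the table
print(tgt)  # red cup, table
```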
huggingartists/upsahl
--- language: - en tags: - huggingartists - lyrics --- # Dataset Card for "huggingartists/upsahl" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [How to use](#how-to-use) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [About](#about) ## Dataset Description - **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists) - **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of the generated dataset:** 0.168635 MB <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: 
url(&#39;https://images.genius.com/e0fa9b5bdd037ab75031dd7372d05cd6.1000x1000x1.jpg&#39;)"> </div> </div> <a href="https://huggingface.co/huggingartists/upsahl"> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> </a> <div style="text-align: center; font-size: 16px; font-weight: 800">UPSAHL</div> <a href="https://genius.com/artists/upsahl"> <div style="text-align: center; font-size: 14px;">@upsahl</div> </a> </div> ### Dataset Summary The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists. Model is available [here](https://huggingface.co/huggingartists/upsahl). ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages en ## How to use How to load this dataset directly with the datasets library: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/upsahl") ``` ## Dataset Structure An example of 'train' looks as follows. ``` This example was too long and was cropped: { "text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..." } ``` ### Data Fields The data fields are the same among all splits. - `text`: a `string` feature. 
### Data Splits | train |validation|test| |------:|---------:|---:| |107| -| -| 'Train' can be easily divided into 'train' & 'validation' & 'test' with few lines of code: ```python from datasets import load_dataset, Dataset, DatasetDict import numpy as np datasets = load_dataset("huggingartists/upsahl") train_percentage = 0.9 validation_percentage = 0.07 test_percentage = 0.03 train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))]) datasets = DatasetDict( { 'train': Dataset.from_dict({'text': list(train)}), 'validation': Dataset.from_dict({'text': list(validation)}), 'test': Dataset.from_dict({'text': list(test)}) } ) ``` ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @InProceedings{huggingartists, author={Aleksey Korshuk} year=2021 } ``` ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
CyberHarem/miyuki_kantaicollection
---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---

# Dataset of miyuki/深雪/深雪 (Kantai Collection)

This is the dataset of miyuki/深雪/深雪 (Kantai Collection), containing 491 images and their tags.

The core tags of this character are `short_hair, black_hair, brown_eyes, brown_hair`, which are pruned in this dataset.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).

## List of Packages

| Name             | Images   | Size       | Download                                                                                                                 | Type       | Description                                                           |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw              | 491      | 311.41 MiB | [Download](https://huggingface.co/datasets/CyberHarem/miyuki_kantaicollection/resolve/main/dataset-raw.zip)              | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800              | 491      | 233.89 MiB | [Download](https://huggingface.co/datasets/CyberHarem/miyuki_kantaicollection/resolve/main/dataset-800.zip)              | IMG+TXT    | dataset with the shorter side not exceeding 800 pixels.              |
| stage3-p480-800  | 878      | 408.35 MiB | [Download](https://huggingface.co/datasets/CyberHarem/miyuki_kantaicollection/resolve/main/dataset-stage3-p480-800.zip)  | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels.  |
| 1200             | 491      | 293.32 MiB | [Download](https://huggingface.co/datasets/CyberHarem/miyuki_kantaicollection/resolve/main/dataset-1200.zip)             | IMG+TXT    | dataset with the shorter side not exceeding 1200 pixels.             |
| stage3-p480-1200 | 878      | 500.64 MiB | [Download](https://huggingface.co/datasets/CyberHarem/miyuki_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT    | 3-stage cropped dataset with the area not less than 480x480 pixels.  |

### Load Raw Dataset with Waifuc

We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code:

```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/miyuki_kantaicollection',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```

## List of Clusters

List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags | |----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, blue_sailor_collar, blue_skirt, pleated_skirt, serafuku, simple_background, solo, white_background, black_socks, full_body, kneehighs, looking_at_viewer, grin, short_sleeves, standing | | 1 | 20 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, blue_sailor_collar, blue_skirt, serafuku, solo, pleated_skirt, simple_background, white_background, cowboy_shot, looking_at_viewer, grin, blue_neckerchief, one-hour_drawing_challenge | | 2 | 14 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, black_socks, blue_sailor_collar, blue_skirt, pleated_skirt, serafuku, solo, short_sleeves, sidelocks, full_body, grey_footwear, medium_hair, shoes, standing, holding | | 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, adapted_turret, blue_sailor_collar, blue_skirt, machinery, pleated_skirt, rigging, serafuku, solo, torpedo_launcher, full_body, short_sleeves, sidelocks, smokestack, cannon, cloud, ocean, sky, 
black_socks, headset, standing_on_liquid | | 4 | 10 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, pleated_skirt, serafuku, solo, turret, machinery, looking_at_viewer, cannon, grin, white_background, blue_skirt, dated, open_mouth, sailor_collar, simple_background | | 5 | 7 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, blue_sailor_collar, serafuku, solo, upper_body, simple_background, white_background, open_mouth, blue_neckerchief, blush | | 6 | 5 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 2girls, pleated_skirt, serafuku, blue_sailor_collar, blue_skirt, open_mouth, solo_focus, closed_eyes, kneehighs, sitting | | 7 | 8 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | 1girl, blush, serafuku, shirt_lift, 1boy, hetero, navel, nipples, small_breasts, white_panties, solo_focus, open_mouth, pleated_skirt, simple_background, white_background, blue_skirt, military_uniform, panty_pull, sailor_collar, skirt_pull | | 8 | 9 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | 1girl, official_alternate_costume, solo, white_shirt, bag, smile, suspenders, full_body, looking_at_viewer | | 9 | 7 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | 1girl, simple_background, solo, white_background, barefoot, looking_at_viewer, school_swimsuit, 
full_body, black_one-piece_swimsuit, blue_one-piece_swimsuit, breasts, grin, spread_legs, wavy_hair | ### Table Version | # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blue_sailor_collar | blue_skirt | pleated_skirt | serafuku | simple_background | solo | white_background | black_socks | full_body | kneehighs | looking_at_viewer | grin | short_sleeves | standing | cowboy_shot | blue_neckerchief | one-hour_drawing_challenge | sidelocks | grey_footwear | medium_hair | shoes | holding | adapted_turret | machinery | rigging | torpedo_launcher | smokestack | cannon | cloud | ocean | sky | headset | standing_on_liquid | turret | dated | open_mouth | sailor_collar | upper_body | blush | 2girls | solo_focus | closed_eyes | sitting | shirt_lift | 1boy | hetero | navel | nipples | small_breasts | white_panties | military_uniform | panty_pull | skirt_pull | official_alternate_costume | white_shirt | bag | smile | suspenders | barefoot | school_swimsuit | black_one-piece_swimsuit | blue_one-piece_swimsuit | breasts | spread_legs | wavy_hair | 
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------------|:-------------|:----------------|:-----------|:--------------------|:-------|:-------------------|:--------------|:------------|:------------|:--------------------|:-------|:----------------|:-----------|:--------------|:-------------------|:-----------------------------|:------------|:----------------|:--------------|:--------|:----------|:-----------------|:------------|:----------|:-------------------|:-------------|:---------|:--------|:--------|:------|:----------|:---------------------|:---------|:--------|:-------------|:----------------|:-------------|:--------|:---------|:-------------|:--------------|:----------|:-------------|:-------|:---------|:--------|:----------|:----------------|:----------------|:-------------------|:-------------|:-------------|:-----------------------------|:--------------|:------|:--------|:-------------|:-----------|:------------------|:---------------------------|:--------------------------|:----------|:--------------|:------------| | 0 | 6 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 1 | 20 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | X | X | | | | X | X | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 2 | 14 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | 
![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | X | X | | X | | X | X | | | | X | X | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 3 | 6 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | X | X | X | | X | | X | X | | | | X | | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 4 | 10 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | | X | X | X | X | X | X | | | | X | X | | | | | | | | | | | | X | | | | X | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 5 | 7 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | X | | | X | X | X | X | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | X | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | 6 | 5 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | | X | X | X | X | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | 7 | 8 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | X | | X | X | X | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | | X | | X | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | 8 | 9 | 
![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | X | | | | | | X | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | | | | | | | | | 9 | 7 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | X | | | | | X | X | X | | X | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X |
autopromptsgtp/9944a48a8f9f02ae75e5305a06ed3ff4e5d6cc2a
--- license: afl-3.0 ---
AppleHarem/sims_azurlane
---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---

# Dataset of sims (Azur Lane)

This is the dataset of sims (Azur Lane), containing 68 images and their tags.

Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).

This WebUI contains the crawlers and other tools: ([LittleAppleWebUI](https://github.com/LittleApple-fp16/LittleAppleWebUI))

| Name            | Images   | Download                                | Description                                                                              |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw             | 68       | [Download](dataset-raw.zip)             | Raw data with meta information.                                                          |
| raw-stage3      | 181      | [Download](dataset-raw-stage3.zip)      | 3-stage cropped raw data with meta information.                                          |
| raw-stage3-eyes | 200      | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information.                         |
| 384x512         | 68       | [Download](dataset-384x512.zip)         | 384x512 aligned dataset.                                                                 |
| 512x704         | 68       | [Download](dataset-512x704.zip)         | 512x704 aligned dataset.                                                                 |
| 640x880         | 68       | [Download](dataset-640x880.zip)         | 640x880 aligned dataset.                                                                 |
| stage3-640      | 181      | [Download](dataset-stage3-640.zip)      | 3-stage cropped dataset with the shorter side not exceeding 640 pixels.                  |
| stage3-800      | 181      | [Download](dataset-stage3-800.zip)      | 3-stage cropped dataset with the shorter side not exceeding 800 pixels.                  |
| stage3-p512-640 | 118      | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels.                      |
| stage3-eyes-640 | 200      | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 200      | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
Heerak/abstract_summary
--- license: afl-3.0 ---
Seongill/NQ_conflict_10_half
--- dataset_info: features: - name: question dtype: string - name: answers sequence: string - name: substitute dtype: string - name: num_answer dtype: int64 - name: ctxs list: - name: hasanswer dtype: bool - name: id dtype: string - name: score dtype: float64 - name: text dtype: string - name: title dtype: string - name: is_conflict dtype: bool - name: num_replace dtype: int64 splits: - name: train num_bytes: 23977166 num_examples: 3610 download_size: 13861525 dataset_size: 23977166 configs: - config_name: default data_files: - split: train path: data/train-* ---
heliosprime/twitter_dataset_1712824294
--- dataset_info: features: - name: id dtype: string - name: tweet_content dtype: string - name: user_name dtype: string - name: user_id dtype: string - name: created_at dtype: string - name: url dtype: string - name: favourite_count dtype: int64 - name: scraped_at dtype: string - name: image_urls dtype: string splits: - name: train num_bytes: 41113 num_examples: 103 download_size: 24336 dataset_size: 41113 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "twitter_dataset_1712824294" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jlbaker361/art_bar_renn
--- dataset_info: features: - name: img dtype: image - name: style dtype: string - name: split dtype: string splits: - name: train num_bytes: 12758150.608 num_examples: 8688 download_size: 8156798 dataset_size: 12758150.608 --- # Dataset Card for "art_bar_renn" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Shuichiakai/vlk2
--- license: artistic-2.0 ---
yeombora/sample
--- license: mit ---
akshatupmanya/5datasetbyakshat
--- license: mit ---
duxprajapati/symptom-disease-dataset
--- task_categories: - text-classification language: - en ---
RIW/small_coco_test_10
--- dataset_info: features: - name: image dtype: image - name: caption dtype: string - name: url dtype: string - name: key dtype: string - name: status dtype: string - name: error_message dtype: 'null' - name: width dtype: int64 - name: height dtype: int64 - name: original_width dtype: int64 - name: original_height dtype: int64 - name: exif dtype: string - name: sha256 dtype: string - name: watermark dtype: bool splits: - name: train num_bytes: 807190652.44 num_examples: 9840 - name: validation num_bytes: 885003521.915 num_examples: 8965 download_size: 366742283 dataset_size: 1692194174.355 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* ---
roszcz/maestro-v1-sustain
--- dataset_info: features: - name: notes struct: - name: duration sequence: float64 - name: end sequence: float64 - name: pitch sequence: int64 - name: start sequence: float64 - name: velocity sequence: int64 - name: composer dtype: string - name: title dtype: string - name: year dtype: int64 - name: midi_filename dtype: string splits: - name: test num_bytes: 29686362 num_examples: 177 - name: validation num_bytes: 25599834 num_examples: 137 - name: train num_bytes: 226534277 num_examples: 962 download_size: 87287914 dataset_size: 281820473 --- # Dataset Card for "maestro-v1-sustain" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
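The `notes` struct in the schema above stores note attributes as parallel sequences (one entry per note). A minimal sketch of iterating such a record, using toy values; the assumption that `duration` equals `end - start` is the author's reading of the schema, not something the card states:

```python
# Toy record matching this card's `notes` schema: parallel arrays,
# one entry per note. All values here are illustrative, not real data.
record = {
    "notes": {
        "pitch":    [60, 64, 67],
        "velocity": [80, 72, 90],
        "start":    [0.0, 0.5, 1.0],
        "end":      [0.4, 1.1, 1.6],
        "duration": [0.4, 0.6, 0.6],
    },
    "composer": "Example Composer",
    "title": "Example Title",
}

# Iterate note-wise by zipping the parallel arrays.
notes = record["notes"]
for pitch, start, end, dur in zip(notes["pitch"], notes["start"], notes["end"], notes["duration"]):
    # Assumed invariant: duration == end - start (holds for this toy data).
    assert abs(dur - (end - start)) < 1e-9
    print(f"pitch={pitch} start={start:.2f} duration={dur:.2f}")
```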
distilabel-internal-testing/testing-distilabel-push-to-hub-2
--- dataset_info: features: - name: instruction dtype: string - name: completion dtype: string - name: meta struct: - name: category dtype: string - name: completion dtype: string - name: id dtype: int64 - name: input dtype: string - name: motivation_app dtype: string - name: prompt dtype: string - name: source dtype: string - name: subcategory dtype: string - name: model dtype: string - name: generation sequence: string splits: - name: train num_bytes: 774205 num_examples: 327 download_size: 460466 dataset_size: 774205 configs: - config_name: default data_files: - split: train path: data/train-* ---
debugzxcv/nana7mi
--- license: unknown ---