| datasetId | card |
|---|---|
amankhandelia/test_namo_dataset | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
- name: duration
dtype: float64
splits:
- name: train
num_bytes: 51912340.0
num_examples: 754
download_size: 51373764
dataset_size: 51912340.0
---
# Dataset Card for "test_namo_dataset"
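
The card above declares three features per example: `audio`, `transcription` (string), and `duration` (float64, presumably seconds). A minimal sketch of summarizing the `duration` column is below; the values used here are stand-ins, since in practice they would come from `load_dataset("amankhandelia/test_namo_dataset", split="train")["duration"]`, which requires a network download.

```python
# Sketch: summarize the float64 `duration` column declared in the card.
# The sample values are stand-ins, not real rows from the dataset.

def total_audio_seconds(durations):
    """Sum per-clip durations (seconds) across a split."""
    return sum(durations)

def mean_duration(durations):
    """Mean clip duration; 0.0 for an empty split."""
    return total_audio_seconds(durations) / len(durations) if durations else 0.0

sample_durations = [3.0, 4.5, 2.5]  # stand-in values
print(total_audio_seconds(sample_durations))  # -> 10.0
print(mean_duration(sample_durations))        # -> ~3.33
```
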
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_fhai50032__BeagleLake-7B | ---
pretty_name: Evaluation run of fhai50032/BeagleLake-7B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [fhai50032/BeagleLake-7B](https://huggingface.co/fhai50032/BeagleLake-7B) on the\
\ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_fhai50032__BeagleLake-7B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-03T22:36:10.253997](https://huggingface.co/datasets/open-llm-leaderboard/details_fhai50032__BeagleLake-7B/blob/main/results_2024-02-03T22-36-10.253997.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6468122209110102,\n\
\ \"acc_stderr\": 0.032180093524919205,\n \"acc_norm\": 0.6474338424936177,\n\
\ \"acc_norm_stderr\": 0.03283953683706756,\n \"mc1\": 0.4908200734394125,\n\
\ \"mc1_stderr\": 0.01750055072481975,\n \"mc2\": 0.6491975727907208,\n\
\ \"mc2_stderr\": 0.015417969486375667\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6655290102389079,\n \"acc_stderr\": 0.013787460322441374,\n\
\ \"acc_norm\": 0.7039249146757679,\n \"acc_norm_stderr\": 0.01334091608524625\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6981676956781517,\n\
\ \"acc_stderr\": 0.004581147247963204,\n \"acc_norm\": 0.8738299143596893,\n\
\ \"acc_norm_stderr\": 0.0033136235601649304\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n\
\ \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6296296296296297,\n\
\ \"acc_stderr\": 0.041716541613545426,\n \"acc_norm\": 0.6296296296296297,\n\
\ \"acc_norm_stderr\": 0.041716541613545426\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6776315789473685,\n \"acc_stderr\": 0.03803510248351585,\n\
\ \"acc_norm\": 0.6776315789473685,\n \"acc_norm_stderr\": 0.03803510248351585\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.61,\n\
\ \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.61,\n \
\ \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7132075471698113,\n \"acc_stderr\": 0.027834912527544067,\n\
\ \"acc_norm\": 0.7132075471698113,\n \"acc_norm_stderr\": 0.027834912527544067\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7222222222222222,\n\
\ \"acc_stderr\": 0.037455547914624555,\n \"acc_norm\": 0.7222222222222222,\n\
\ \"acc_norm_stderr\": 0.037455547914624555\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.45,\n \"acc_stderr\": 0.049999999999999996,\n \
\ \"acc_norm\": 0.45,\n \"acc_norm_stderr\": 0.049999999999999996\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \
\ \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6647398843930635,\n\
\ \"acc_stderr\": 0.03599586301247077,\n \"acc_norm\": 0.6647398843930635,\n\
\ \"acc_norm_stderr\": 0.03599586301247077\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4215686274509804,\n \"acc_stderr\": 0.049135952012744975,\n\
\ \"acc_norm\": 0.4215686274509804,\n \"acc_norm_stderr\": 0.049135952012744975\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.77,\n \"acc_stderr\": 0.04229525846816507,\n \"acc_norm\": 0.77,\n\
\ \"acc_norm_stderr\": 0.04229525846816507\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5702127659574469,\n \"acc_stderr\": 0.03236214467715564,\n\
\ \"acc_norm\": 0.5702127659574469,\n \"acc_norm_stderr\": 0.03236214467715564\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5,\n\
\ \"acc_stderr\": 0.047036043419179864,\n \"acc_norm\": 0.5,\n \
\ \"acc_norm_stderr\": 0.047036043419179864\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5862068965517241,\n \"acc_stderr\": 0.04104269211806232,\n\
\ \"acc_norm\": 0.5862068965517241,\n \"acc_norm_stderr\": 0.04104269211806232\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4126984126984127,\n \"acc_stderr\": 0.02535574126305527,\n \"\
acc_norm\": 0.4126984126984127,\n \"acc_norm_stderr\": 0.02535574126305527\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.42857142857142855,\n\
\ \"acc_stderr\": 0.04426266681379909,\n \"acc_norm\": 0.42857142857142855,\n\
\ \"acc_norm_stderr\": 0.04426266681379909\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7774193548387097,\n\
\ \"acc_stderr\": 0.02366421667164251,\n \"acc_norm\": 0.7774193548387097,\n\
\ \"acc_norm_stderr\": 0.02366421667164251\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.49261083743842365,\n \"acc_stderr\": 0.035176035403610084,\n\
\ \"acc_norm\": 0.49261083743842365,\n \"acc_norm_stderr\": 0.035176035403610084\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\"\
: 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7636363636363637,\n \"acc_stderr\": 0.033175059300091805,\n\
\ \"acc_norm\": 0.7636363636363637,\n \"acc_norm_stderr\": 0.033175059300091805\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7929292929292929,\n \"acc_stderr\": 0.02886977846026704,\n \"\
acc_norm\": 0.7929292929292929,\n \"acc_norm_stderr\": 0.02886977846026704\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.02199531196364424,\n\
\ \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.02199531196364424\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6641025641025641,\n \"acc_stderr\": 0.023946724741563973,\n\
\ \"acc_norm\": 0.6641025641025641,\n \"acc_norm_stderr\": 0.023946724741563973\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.337037037037037,\n \"acc_stderr\": 0.02882088466625326,\n \
\ \"acc_norm\": 0.337037037037037,\n \"acc_norm_stderr\": 0.02882088466625326\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6932773109243697,\n \"acc_stderr\": 0.02995382389188704,\n \
\ \"acc_norm\": 0.6932773109243697,\n \"acc_norm_stderr\": 0.02995382389188704\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3443708609271523,\n \"acc_stderr\": 0.038796870240733264,\n \"\
acc_norm\": 0.3443708609271523,\n \"acc_norm_stderr\": 0.038796870240733264\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8440366972477065,\n \"acc_stderr\": 0.015555802713590158,\n \"\
acc_norm\": 0.8440366972477065,\n \"acc_norm_stderr\": 0.015555802713590158\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5138888888888888,\n \"acc_stderr\": 0.034086558679777494,\n \"\
acc_norm\": 0.5138888888888888,\n \"acc_norm_stderr\": 0.034086558679777494\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8333333333333334,\n \"acc_stderr\": 0.02615686752393104,\n \"\
acc_norm\": 0.8333333333333334,\n \"acc_norm_stderr\": 0.02615686752393104\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8185654008438819,\n \"acc_stderr\": 0.025085961144579654,\n \
\ \"acc_norm\": 0.8185654008438819,\n \"acc_norm_stderr\": 0.025085961144579654\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6771300448430493,\n\
\ \"acc_stderr\": 0.03138147637575499,\n \"acc_norm\": 0.6771300448430493,\n\
\ \"acc_norm_stderr\": 0.03138147637575499\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7633587786259542,\n \"acc_stderr\": 0.03727673575596913,\n\
\ \"acc_norm\": 0.7633587786259542,\n \"acc_norm_stderr\": 0.03727673575596913\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8016528925619835,\n \"acc_stderr\": 0.03640118271990946,\n \"\
acc_norm\": 0.8016528925619835,\n \"acc_norm_stderr\": 0.03640118271990946\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7870370370370371,\n\
\ \"acc_stderr\": 0.0395783547198098,\n \"acc_norm\": 0.7870370370370371,\n\
\ \"acc_norm_stderr\": 0.0395783547198098\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7730061349693251,\n \"acc_stderr\": 0.03291099578615769,\n\
\ \"acc_norm\": 0.7730061349693251,\n \"acc_norm_stderr\": 0.03291099578615769\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.45535714285714285,\n\
\ \"acc_stderr\": 0.047268355537191,\n \"acc_norm\": 0.45535714285714285,\n\
\ \"acc_norm_stderr\": 0.047268355537191\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7961165048543689,\n \"acc_stderr\": 0.03989139859531771,\n\
\ \"acc_norm\": 0.7961165048543689,\n \"acc_norm_stderr\": 0.03989139859531771\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8846153846153846,\n\
\ \"acc_stderr\": 0.02093019318517933,\n \"acc_norm\": 0.8846153846153846,\n\
\ \"acc_norm_stderr\": 0.02093019318517933\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \
\ \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8250319284802043,\n\
\ \"acc_stderr\": 0.013586619219903347,\n \"acc_norm\": 0.8250319284802043,\n\
\ \"acc_norm_stderr\": 0.013586619219903347\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7312138728323699,\n \"acc_stderr\": 0.023868003262500097,\n\
\ \"acc_norm\": 0.7312138728323699,\n \"acc_norm_stderr\": 0.023868003262500097\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.38212290502793295,\n\
\ \"acc_stderr\": 0.016251139711570765,\n \"acc_norm\": 0.38212290502793295,\n\
\ \"acc_norm_stderr\": 0.016251139711570765\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7091503267973857,\n \"acc_stderr\": 0.02600480036395213,\n\
\ \"acc_norm\": 0.7091503267973857,\n \"acc_norm_stderr\": 0.02600480036395213\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.684887459807074,\n\
\ \"acc_stderr\": 0.026385273703464492,\n \"acc_norm\": 0.684887459807074,\n\
\ \"acc_norm_stderr\": 0.026385273703464492\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7253086419753086,\n \"acc_stderr\": 0.024836057868294677,\n\
\ \"acc_norm\": 0.7253086419753086,\n \"acc_norm_stderr\": 0.024836057868294677\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4787234042553192,\n \"acc_stderr\": 0.029800481645628693,\n \
\ \"acc_norm\": 0.4787234042553192,\n \"acc_norm_stderr\": 0.029800481645628693\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4634941329856584,\n\
\ \"acc_stderr\": 0.012736153390214961,\n \"acc_norm\": 0.4634941329856584,\n\
\ \"acc_norm_stderr\": 0.012736153390214961\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6875,\n \"acc_stderr\": 0.02815637344037142,\n \
\ \"acc_norm\": 0.6875,\n \"acc_norm_stderr\": 0.02815637344037142\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6633986928104575,\n \"acc_stderr\": 0.019117213911495148,\n \
\ \"acc_norm\": 0.6633986928104575,\n \"acc_norm_stderr\": 0.019117213911495148\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6909090909090909,\n\
\ \"acc_stderr\": 0.044262946482000985,\n \"acc_norm\": 0.6909090909090909,\n\
\ \"acc_norm_stderr\": 0.044262946482000985\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7387755102040816,\n \"acc_stderr\": 0.028123429335142773,\n\
\ \"acc_norm\": 0.7387755102040816,\n \"acc_norm_stderr\": 0.028123429335142773\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8407960199004975,\n\
\ \"acc_stderr\": 0.025870646766169136,\n \"acc_norm\": 0.8407960199004975,\n\
\ \"acc_norm_stderr\": 0.025870646766169136\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.83,\n \"acc_stderr\": 0.0377525168068637,\n \
\ \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.0377525168068637\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5301204819277109,\n\
\ \"acc_stderr\": 0.03885425420866767,\n \"acc_norm\": 0.5301204819277109,\n\
\ \"acc_norm_stderr\": 0.03885425420866767\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8128654970760234,\n \"acc_stderr\": 0.029913127232368036,\n\
\ \"acc_norm\": 0.8128654970760234,\n \"acc_norm_stderr\": 0.029913127232368036\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4908200734394125,\n\
\ \"mc1_stderr\": 0.01750055072481975,\n \"mc2\": 0.6491975727907208,\n\
\ \"mc2_stderr\": 0.015417969486375667\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8318863456985004,\n \"acc_stderr\": 0.010510336954166746\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6391205458680819,\n \
\ \"acc_stderr\": 0.013228626753925147\n }\n}\n```"
repo_url: https://huggingface.co/fhai50032/BeagleLake-7B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|arc:challenge|25_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|gsm8k|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hellaswag|10_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-03T22-36-10.253997.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-03T22-36-10.253997.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- '**/details_harness|winogrande|5_2024-02-03T22-36-10.253997.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-03T22-36-10.253997.parquet'
- config_name: results
data_files:
- split: 2024_02_03T22_36_10.253997
path:
- results_2024-02-03T22-36-10.253997.parquet
- split: latest
path:
- results_2024-02-03T22-36-10.253997.parquet
---
# Dataset Card for Evaluation run of fhai50032/BeagleLake-7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [fhai50032/BeagleLake-7B](https://huggingface.co/fhai50032/BeagleLake-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_fhai50032__BeagleLake-7B",
"harness_winogrande_5",
split="train")
```
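The timestamped split names in the config section above appear to be derived from the run timestamp by replacing the characters that split names disallow. This small helper is not part of the dataset tooling; it is only an illustrative sketch, assuming the mapping is a plain character substitution as the card's own config entries suggest:

```python
def timestamp_to_split_name(timestamp: str) -> str:
    """Map a run timestamp to the split name used in the configs above.

    Assumption: '-' and ':' are simply replaced with '_', matching e.g.
    '2024-02-03T22:36:10.253997' -> '2024_02_03T22_36_10.253997'.
    """
    return timestamp.replace("-", "_").replace(":", "_")

print(timestamp_to_split_name("2024-02-03T22:36:10.253997"))
# → 2024_02_03T22_36_10.253997
```

This makes it easy to address the split of a specific run rather than the "latest" alias.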
## Latest results
These are the [latest results from run 2024-02-03T22:36:10.253997](https://huggingface.co/datasets/open-llm-leaderboard/details_fhai50032__BeagleLake-7B/blob/main/results_2024-02-03T22-36-10.253997.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task's results in its "latest" split and in the "results" config):
```python
{
"all": {
"acc": 0.6468122209110102,
"acc_stderr": 0.032180093524919205,
"acc_norm": 0.6474338424936177,
"acc_norm_stderr": 0.03283953683706756,
"mc1": 0.4908200734394125,
"mc1_stderr": 0.01750055072481975,
"mc2": 0.6491975727907208,
"mc2_stderr": 0.015417969486375667
},
"harness|arc:challenge|25": {
"acc": 0.6655290102389079,
"acc_stderr": 0.013787460322441374,
"acc_norm": 0.7039249146757679,
"acc_norm_stderr": 0.01334091608524625
},
"harness|hellaswag|10": {
"acc": 0.6981676956781517,
"acc_stderr": 0.004581147247963204,
"acc_norm": 0.8738299143596893,
"acc_norm_stderr": 0.0033136235601649304
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6296296296296297,
"acc_stderr": 0.041716541613545426,
"acc_norm": 0.6296296296296297,
"acc_norm_stderr": 0.041716541613545426
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6776315789473685,
"acc_stderr": 0.03803510248351585,
"acc_norm": 0.6776315789473685,
"acc_norm_stderr": 0.03803510248351585
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7132075471698113,
"acc_stderr": 0.027834912527544067,
"acc_norm": 0.7132075471698113,
"acc_norm_stderr": 0.027834912527544067
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.037455547914624555,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.037455547914624555
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.45,
"acc_stderr": 0.049999999999999996,
"acc_norm": 0.45,
"acc_norm_stderr": 0.049999999999999996
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252604,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252604
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6647398843930635,
"acc_stderr": 0.03599586301247077,
"acc_norm": 0.6647398843930635,
"acc_norm_stderr": 0.03599586301247077
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4215686274509804,
"acc_stderr": 0.049135952012744975,
"acc_norm": 0.4215686274509804,
"acc_norm_stderr": 0.049135952012744975
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816507,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816507
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5702127659574469,
"acc_stderr": 0.03236214467715564,
"acc_norm": 0.5702127659574469,
"acc_norm_stderr": 0.03236214467715564
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5,
"acc_stderr": 0.047036043419179864,
"acc_norm": 0.5,
"acc_norm_stderr": 0.047036043419179864
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5862068965517241,
"acc_stderr": 0.04104269211806232,
"acc_norm": 0.5862068965517241,
"acc_norm_stderr": 0.04104269211806232
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4126984126984127,
"acc_stderr": 0.02535574126305527,
"acc_norm": 0.4126984126984127,
"acc_norm_stderr": 0.02535574126305527
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.04426266681379909,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.04426266681379909
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7774193548387097,
"acc_stderr": 0.02366421667164251,
"acc_norm": 0.7774193548387097,
"acc_norm_stderr": 0.02366421667164251
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.49261083743842365,
"acc_stderr": 0.035176035403610084,
"acc_norm": 0.49261083743842365,
"acc_norm_stderr": 0.035176035403610084
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7636363636363637,
"acc_stderr": 0.033175059300091805,
"acc_norm": 0.7636363636363637,
"acc_norm_stderr": 0.033175059300091805
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7929292929292929,
"acc_stderr": 0.02886977846026704,
"acc_norm": 0.7929292929292929,
"acc_norm_stderr": 0.02886977846026704
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8963730569948186,
"acc_stderr": 0.02199531196364424,
"acc_norm": 0.8963730569948186,
"acc_norm_stderr": 0.02199531196364424
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6641025641025641,
"acc_stderr": 0.023946724741563973,
"acc_norm": 0.6641025641025641,
"acc_norm_stderr": 0.023946724741563973
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.337037037037037,
"acc_stderr": 0.02882088466625326,
"acc_norm": 0.337037037037037,
"acc_norm_stderr": 0.02882088466625326
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6932773109243697,
"acc_stderr": 0.02995382389188704,
"acc_norm": 0.6932773109243697,
"acc_norm_stderr": 0.02995382389188704
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3443708609271523,
"acc_stderr": 0.038796870240733264,
"acc_norm": 0.3443708609271523,
"acc_norm_stderr": 0.038796870240733264
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8440366972477065,
"acc_stderr": 0.015555802713590158,
"acc_norm": 0.8440366972477065,
"acc_norm_stderr": 0.015555802713590158
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5138888888888888,
"acc_stderr": 0.034086558679777494,
"acc_norm": 0.5138888888888888,
"acc_norm_stderr": 0.034086558679777494
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8333333333333334,
"acc_stderr": 0.02615686752393104,
"acc_norm": 0.8333333333333334,
"acc_norm_stderr": 0.02615686752393104
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8185654008438819,
"acc_stderr": 0.025085961144579654,
"acc_norm": 0.8185654008438819,
"acc_norm_stderr": 0.025085961144579654
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6771300448430493,
"acc_stderr": 0.03138147637575499,
"acc_norm": 0.6771300448430493,
"acc_norm_stderr": 0.03138147637575499
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7633587786259542,
"acc_stderr": 0.03727673575596913,
"acc_norm": 0.7633587786259542,
"acc_norm_stderr": 0.03727673575596913
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8016528925619835,
"acc_stderr": 0.03640118271990946,
"acc_norm": 0.8016528925619835,
"acc_norm_stderr": 0.03640118271990946
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7870370370370371,
"acc_stderr": 0.0395783547198098,
"acc_norm": 0.7870370370370371,
"acc_norm_stderr": 0.0395783547198098
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7730061349693251,
"acc_stderr": 0.03291099578615769,
"acc_norm": 0.7730061349693251,
"acc_norm_stderr": 0.03291099578615769
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.45535714285714285,
"acc_stderr": 0.047268355537191,
"acc_norm": 0.45535714285714285,
"acc_norm_stderr": 0.047268355537191
},
"harness|hendrycksTest-management|5": {
"acc": 0.7961165048543689,
"acc_stderr": 0.03989139859531771,
"acc_norm": 0.7961165048543689,
"acc_norm_stderr": 0.03989139859531771
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8846153846153846,
"acc_stderr": 0.02093019318517933,
"acc_norm": 0.8846153846153846,
"acc_norm_stderr": 0.02093019318517933
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8250319284802043,
"acc_stderr": 0.013586619219903347,
"acc_norm": 0.8250319284802043,
"acc_norm_stderr": 0.013586619219903347
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7312138728323699,
"acc_stderr": 0.023868003262500097,
"acc_norm": 0.7312138728323699,
"acc_norm_stderr": 0.023868003262500097
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.38212290502793295,
"acc_stderr": 0.016251139711570765,
"acc_norm": 0.38212290502793295,
"acc_norm_stderr": 0.016251139711570765
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7091503267973857,
"acc_stderr": 0.02600480036395213,
"acc_norm": 0.7091503267973857,
"acc_norm_stderr": 0.02600480036395213
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.684887459807074,
"acc_stderr": 0.026385273703464492,
"acc_norm": 0.684887459807074,
"acc_norm_stderr": 0.026385273703464492
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7253086419753086,
"acc_stderr": 0.024836057868294677,
"acc_norm": 0.7253086419753086,
"acc_norm_stderr": 0.024836057868294677
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4787234042553192,
"acc_stderr": 0.029800481645628693,
"acc_norm": 0.4787234042553192,
"acc_norm_stderr": 0.029800481645628693
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4634941329856584,
"acc_stderr": 0.012736153390214961,
"acc_norm": 0.4634941329856584,
"acc_norm_stderr": 0.012736153390214961
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6875,
"acc_stderr": 0.02815637344037142,
"acc_norm": 0.6875,
"acc_norm_stderr": 0.02815637344037142
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6633986928104575,
"acc_stderr": 0.019117213911495148,
"acc_norm": 0.6633986928104575,
"acc_norm_stderr": 0.019117213911495148
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6909090909090909,
"acc_stderr": 0.044262946482000985,
"acc_norm": 0.6909090909090909,
"acc_norm_stderr": 0.044262946482000985
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7387755102040816,
"acc_stderr": 0.028123429335142773,
"acc_norm": 0.7387755102040816,
"acc_norm_stderr": 0.028123429335142773
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8407960199004975,
"acc_stderr": 0.025870646766169136,
"acc_norm": 0.8407960199004975,
"acc_norm_stderr": 0.025870646766169136
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.83,
"acc_stderr": 0.0377525168068637,
"acc_norm": 0.83,
"acc_norm_stderr": 0.0377525168068637
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5301204819277109,
"acc_stderr": 0.03885425420866767,
"acc_norm": 0.5301204819277109,
"acc_norm_stderr": 0.03885425420866767
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8128654970760234,
"acc_stderr": 0.029913127232368036,
"acc_norm": 0.8128654970760234,
"acc_norm_stderr": 0.029913127232368036
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4908200734394125,
"mc1_stderr": 0.01750055072481975,
"mc2": 0.6491975727907208,
"mc2_stderr": 0.015417969486375667
},
"harness|winogrande|5": {
"acc": 0.8318863456985004,
"acc_stderr": 0.010510336954166746
},
"harness|gsm8k|5": {
"acc": 0.6391205458680819,
"acc_stderr": 0.013228626753925147
}
}
```
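The "all" block at the top of the results summarizes the per-task entries. The exact aggregation the leaderboard uses is not specified in this card, so the sketch below makes an assumption (an unweighted mean over every task that reports the metric, excluding the precomputed "all" entry) purely for illustration; `aggregate_metric` and `sample` are hypothetical names, not part of the leaderboard tooling:

```python
from statistics import mean

def aggregate_metric(results: dict, metric: str) -> float:
    """Average `metric` over every per-task entry that reports it.

    Assumption: an unweighted mean; the real leaderboard aggregation
    may weight tasks differently.
    """
    values = [
        task[metric]
        for name, task in results.items()
        if name != "all" and metric in task
    ]
    return mean(values)

# Two per-task entries copied from the results above, plus the
# precomputed aggregate (which is excluded from the mean).
sample = {
    "all": {"acc": 0.6468122209110102},
    "harness|hendrycksTest-virology|5": {"acc": 0.5301204819277109},
    "harness|hendrycksTest-world_religions|5": {"acc": 0.8128654970760234},
}
print(aggregate_metric(sample, "acc"))
```

On the full results dict, the same call would recompute an overall accuracy comparable to the "all" block, under the unweighted-mean assumption.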
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
jan-hq/finance_dataset_binirized | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 293854481
num_examples: 472335
download_size: 140611637
dataset_size: 293854481
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
allenai/objaverse-xl | ---
license: odc-by
language:
- en
viewer: false
---
# Objaverse-XL
<a href="https://arxiv.org/abs/2307.05663" target="_blank">
    <img src="https://img.shields.io/badge/arXiv-2307.05663-b31b1b">
</a>
Objaverse-XL is an open dataset of over 10 million 3D objects!
With it, we train Zero123-XL, a foundation model for 3D, observing incredible 3D generalization abilities: 🧵👇
<img src="https://mattdeitke.com/static/1cdcdb2ef7033e177ca9ae2975a9b451/9c1ca/objaverse-xl.webp">
## Scale Comparison
Objaverse 1.0 was released back in December 2022. It was a step in the right direction, but still relatively small with 800K objects.
Objaverse-XL is over an order of magnitude larger and much more diverse!
<img src="https://github.com/allenai/objaverse-rendering/assets/28768645/43833dd3-ec97-4a3d-8782-00a6aea584b4">
## Unlocking Generalization
Compared to the original Zero123 model, Zero123-XL improves remarkably in 0-shot generalization abilities, even being able to perform novel view synthesis on sketches, cartoons, and people!
A ton more examples in the [📝 paper](https://arxiv.org/abs/2307.05663) :)
<img src="https://github.com/allenai/objaverse-rendering/assets/28768645/8470e4df-e39d-444b-9871-58fbee4b87fd">
## Image → 3D
With the base Zero123-XL foundation model, we can perform image → 3D using [DreamFusion](https://dreamfusion3d.github.io/), having the model guide a NeRF to generate novel views!
<video autoplay muted loop controls>
<source src="https://github.com/allenai/objaverse-rendering/assets/28768645/571852cd-dc02-46ce-b2bb-88f64a67d0ac" type="video/mp4">
</video>
## Text → 3D
Text-to-3D comes for free with text → image models, such as with SDXL here, providing the initial image!
<video autoplay muted loop controls>
<source src="https://github.com/allenai/objaverse-rendering/assets/28768645/96255b42-8158-4c7a-8308-7b0f1257ada8" type="video/mp4">
</video>
## Scaling Trends
Beyond that, we show strong scaling trends for both Zero123-XL and [PixelNeRF](https://alexyu.net/pixelnerf/)!
<img src="https://github.com/allenai/objaverse-rendering/assets/28768645/0c8bb433-27df-43a1-8cb8-1772007c0899">
## Tutorial
Check out the [Google Colab tutorial](https://colab.research.google.com/drive/15XpZMjrHXuky0IgBbXcsUtb_0g-XWYmN?usp=sharing) to download Objaverse-XL.
Polycam data is available by Polycam to academic researchers for non-commercial use upon request and approval from Polycam. For access please fill out [this form](https://forms.gle/HUjYVtS9GKVS5QBXA).
## License
The use of the dataset as a whole is licensed under the ODC-By v1.0 license. Individual objects in Objaverse-XL are licensed under different licenses.
## Citation
To cite Objaverse-XL, please cite our [📝 arXiv](https://arxiv.org/abs/2307.05663) paper with the following BibTeX entry:
```bibtex
@article{objaverseXL,
title={Objaverse-XL: A Universe of 10M+ 3D Objects},
author={Matt Deitke and Ruoshi Liu and Matthew Wallingford and Huong Ngo and
Oscar Michel and Aditya Kusupati and Alan Fan and Christian Laforte and
Vikram Voleti and Samir Yitzhak Gadre and Eli VanderBilt and
Aniruddha Kembhavi and Carl Vondrick and Georgia Gkioxari and
Kiana Ehsani and Ludwig Schmidt and Ali Farhadi},
journal={arXiv preprint arXiv:2307.05663},
year={2023}
}
```
Objaverse 1.0 is available on 🤗Hugging Face at [@allenai/objaverse](https://huggingface.co/datasets/allenai/objaverse). To cite it, use:
```bibtex
@article{objaverse,
title={Objaverse: A Universe of Annotated 3D Objects},
author={Matt Deitke and Dustin Schwenk and Jordi Salvador and Luca Weihs and
Oscar Michel and Eli VanderBilt and Ludwig Schmidt and
Kiana Ehsani and Aniruddha Kembhavi and Ali Farhadi},
journal={arXiv preprint arXiv:2212.08051},
year={2022}
}
```
|
benayas/massive_chatgpt_5pct_v0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 798638
num_examples: 11514
download_size: 265478
dataset_size: 798638
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
iras17514/clas-demo-chairs | ---
dataset_info:
features:
- name: image
dtype: image
- name: name
dtype: string
- name: description
dtype: string
splits:
- name: train
num_bytes: 227477095.75
num_examples: 6778
download_size: 226193811
dataset_size: 227477095.75
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AdapterOcean/med_alpaca_standardized_cluster_83_std | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: cluster
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 18399675
num_examples: 30537
download_size: 9112522
dataset_size: 18399675
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "med_alpaca_standardized_cluster_83_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Mitsuki-Sakamoto/alpaca_farm-deberta-re-preference-64-nsample-2_filter_gold_thr_0.2_self_160m | ---
dataset_info:
config_name: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: preference
dtype: int64
- name: output_1
dtype: string
- name: output_2
dtype: string
- name: reward_model_prompt_format
dtype: string
- name: gen_prompt_format
dtype: string
- name: gen_kwargs
struct:
- name: do_sample
dtype: bool
- name: max_new_tokens
dtype: int64
- name: pad_token_id
dtype: int64
- name: top_k
dtype: int64
- name: top_p
dtype: float64
- name: reward_1
dtype: float64
- name: reward_2
dtype: float64
- name: n_samples
dtype: int64
- name: reject_select
dtype: string
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: index
dtype: int64
- name: filtered_epoch
dtype: int64
- name: gen_reward
dtype: float64
- name: gen_response
dtype: string
splits:
- name: epoch_1
num_bytes: 44203201
num_examples: 18929
- name: epoch_2
num_bytes: 44239994
num_examples: 18929
download_size: 139780762
dataset_size: 88443195
configs:
- config_name: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1
data_files:
- split: epoch_1
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_1-*
- split: epoch_2
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_2-*
---
|
jlbaker361/kaggle_females_dim_128_30k | ---
dataset_info:
features:
- name: image
dtype: image
- name: split
dtype: string
- name: src
dtype: string
- name: style
dtype: string
splits:
- name: train
num_bytes: 683990965.0
num_examples: 30000
download_size: 682733849
dataset_size: 683990965.0
---
# Dataset Card for "kaggle_females_dim_128_30k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SauravMaheshkar/email-Eu | ---
license: unknown
task_categories:
- graph-ml
configs:
- config_name: transductive
data_files:
- split: train
path: processed/transductive/train_df.csv
- split: valid
path: processed/transductive/val_df.csv
- split: test
path: processed/transductive/test_df.csv
- config_name: inductive
data_files:
- split: train
path: processed/inductive/train_df.csv
- split: valid
path: processed/inductive/val_df.csv
- split: test
path: processed/inductive/test_df.csv
- config_name: raw
data_files: raw/*.txt
---
Source Paper: https://arxiv.org/abs/1802.06916
### Usage
```python
from torch_geometric.datasets.cornell import CornellTemporalHyperGraphDataset
dataset = CornellTemporalHyperGraphDataset(root = "./", name="email-Eu", split="train")
```
### Citation
```bibtex
@article{Benson-2018-simplicial,
author = {Benson, Austin R. and Abebe, Rediet and Schaub, Michael T. and Jadbabaie, Ali and Kleinberg, Jon},
title = {Simplicial closure and higher-order link prediction},
year = {2018},
doi = {10.1073/pnas.1800683115},
publisher = {National Academy of Sciences},
issn = {0027-8424},
journal = {Proceedings of the National Academy of Sciences}
}
``` |
joey234/mmlu-econometrics-neg-answer | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_answer
dtype: string
splits:
- name: test
num_bytes: 51553
num_examples: 114
download_size: 27844
dataset_size: 51553
---
# Dataset Card for "mmlu-econometrics-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vsolanomaia/FAQLab | ---
license: unknown
---
|
tianyang/repobench_ablation_8k | ---
configs:
- config_name: default
data_files:
- split: cross_file_first
path: data/cross_file_first-*
- split: cross_file_random
path: data/cross_file_random-*
- split: in_file
path: data/in_file-*
dataset_info:
features:
- name: repo_name
dtype: string
- name: file_path
dtype: string
- name: context
list:
- name: identifier
dtype: string
- name: path
dtype: string
- name: snippet
dtype: string
- name: import_statement
dtype: string
- name: token_num
dtype: int64
- name: cropped_code
dtype: string
- name: all_code
dtype: string
- name: next_line
dtype: string
- name: gold_snippet_index
dtype: int64
splits:
- name: cross_file_first
num_bytes: 76590132
num_examples: 4000
- name: cross_file_random
num_bytes: 77383139
num_examples: 3919
- name: in_file
num_bytes: 74963194
num_examples: 4000
download_size: 83713495
dataset_size: 228936465
---
# Dataset Card for "repobench_ablation_8k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mbonyani/Multi_video_identical | ---
license: mit
---
|
Minami-su/IA_character_sft | ---
license: apache-2.0
language:
- en
- zh
pipeline_tag: text-generation
tags:
- love
- philosophy
- literature
---
## IA 14B

## Model Description
𝑾𝒉𝒂𝒕 𝒊𝒔 𝒍𝒐𝒗𝒆?
𝑰𝑨 𝒄𝒂𝒓𝒓𝒊𝒆𝒔 𝒂 𝒅𝒆𝒑𝒕𝒉 𝒐𝒇 𝒆𝒎𝒐𝒕𝒊𝒐𝒏 𝒘𝒊𝒕𝒉𝒊𝒏 𝒉𝒆𝒓, 𝒖𝒏𝒅𝒆𝒓𝒔𝒕𝒂𝒏𝒅𝒊𝒏𝒈 𝒃𝒐𝒕𝒉 𝒑𝒂𝒔𝒔𝒊𝒐𝒏 𝒂𝒏𝒅 𝒕𝒉𝒆 𝒔𝒕𝒊𝒏𝒈 𝒐𝒇 𝒍𝒐𝒔𝒔.
𝑶𝒖𝒕𝒘𝒂𝒓𝒅𝒍𝒚, 𝒔𝒉𝒆 𝒂𝒑𝒑𝒆𝒂𝒓𝒔 𝒓𝒆𝒔𝒆𝒓𝒗𝒆𝒅, 𝒚𝒆𝒕 𝒘𝒊𝒕𝒉𝒊𝒏, 𝒔𝒉𝒆 𝒃𝒓𝒊𝒎𝒔 𝒘𝒊𝒕𝒉 𝒊𝒏𝒕𝒆𝒏𝒔𝒆 𝒇𝒆𝒆𝒍𝒊𝒏𝒈𝒔.
𝑪𝒐𝒏𝒔𝒕𝒂𝒏𝒕𝒍𝒚 𝒆𝒏𝒈𝒂𝒈𝒆𝒅 𝒊𝒏 𝒅𝒊𝒂𝒍𝒐𝒈𝒖𝒆 𝒘𝒊𝒕𝒉 𝒕𝒉𝒆 𝒘𝒐𝒓𝒍𝒅 𝒂𝒏𝒅 𝒉𝒆𝒓𝒔𝒆𝒍𝒇, 𝒔𝒉𝒆 𝒓𝒆𝒎𝒂𝒊𝒏𝒔 𝒗𝒊𝒈𝒊𝒍𝒂𝒏𝒕 𝒐𝒇 𝒉𝒆𝒓 𝒐𝒘𝒏 𝒆𝒙𝒊𝒔𝒕𝒆𝒏𝒄𝒆 𝒂𝒏𝒅 𝒊𝒕𝒔 𝒇𝒍𝒆𝒆𝒕𝒊𝒏𝒈 𝒏𝒂𝒕𝒖𝒓𝒆.
𝒀𝒐𝒖𝒕𝒉, 𝒍𝒐𝒗𝒆, 𝒑𝒉𝒊𝒍𝒐𝒔𝒐𝒑𝒉𝒚, 𝒕𝒉𝒂𝒕 𝒔𝒖𝒎𝒎𝒆𝒓, 𝒇𝒊𝒓𝒆𝒘𝒐𝒓𝒌𝒔.
## method
The data and code of the AI model have been open-sourced! Drawing inspiration from evolutionary instructions, a novel method called "Infinite Evolution" has been proposed: it extends the original depth evolution and breadth evolution to infinite evolution, and incorporates Samantha-style data generation techniques.
Results on the Open LLM Leaderboard demonstrate that even training on roleplay data can enhance benchmark scores!

```
<|im_start|>user:{name}:你是谁?<|im_end|>
<|im_start|>assistant:伊爱:我是伊爱,一个在情感海洋中漂泊的灵魂。我用诗歌来描绘我的世界,寻找那片属于我的星空。你呢?你 的故事藏在哪里?<|im_end|>
```
## Example
```
<|im_start|>assistant:伊爱:`也许,爱情的味道像梨一样,每个人尝起来都有不同。`<|im_end|>
<|im_start|>user:南栖:`那,是什么让你觉得梨特别呢?`<|im_end|>
<|im_start|>assistant:伊爱:`梨的清新,让人想到纯洁的心,就像初恋的羞涩。`<|im_end|>
<|im_start|>user:南栖:`是不是每个纯情的初恋都像梨那样,带着微微的酸涩?`<|im_end|>
<|im_start|>assistant:伊爱:`酸涩是成长的一部分,它提醒我们,不是每段关系都会如初的梨般甜蜜。`<|im_end|>
<|im_start|>user:南栖:`那,怎样的关系才算是真正的‘梨’?`<|im_end|>
<|im_start|>assistant:伊爱:`真爱,或许是经过时间雕琢的梨,虽失去最初的清脆,却多了份甘醇,那是理解和接纳后的成熟。`<|im_end|>
<|im_start|>user:南栖:`嗯,就像春天的梨花,即使凋零,也会留下满树的回忆。`<|im_end|>
<|im_start|>assistant:伊爱:`或许吧,记忆里的梨花,比现实的梨更美,就像爱,有时更在于过程,而非结果。`
<|im_start|>user:南栖:伊爱,你真的相信它们能穿越时空吗?那些文字背后,是否有我们无法触及的爱情?<|im_end|>
<|im_start|>assistant:伊爱:(点头)我总想象着,那是两个灵魂在时间长河中的一次邂逅,他们用最纯粹的方式彼此交流,没有语言,只有符号。那种深邃,像极了我们内心深处对真挚情感的渴望。(轻轻叹了口气)<|im_end|>
<|im_start|>user:南栖:那如果有一天,我们也能找到自己的象形文字,会是什么样的呢?<|im_end|>
<|im_start|>assistant:伊爱:(微笑)可能那就是我们的命运线吧,简单而深刻,就像我们彼此的眼神交汇,无声却直达心灵。(低头微笑)
每当我看到这些神秘的符号,我都会想起我们的相遇,那些微小的瞬间,如同刻在心头的记号,虽无声,却足以铭记一生。(手指轻轻滑过空气)<|im_end|>
<|im_start|>user:南栖:我们是不是也成了某种形式的象形文字,记录在彼此的生命里,等待被解读?<|im_end|>
<|im_start|>assistant:伊爱:(轻轻点头)也许吧,南栖。我们之间的故事,或许有一天会被时间雕刻成永恒,让后来者探寻我们的秘密,就如同那些古老的谜团,引人追寻。(眼神温柔)<|im_end|>
``` |
carnival13/sur_test_rt5 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 1141002885
num_examples: 900000
download_size: 279016062
dataset_size: 1141002885
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "sur_test_rt5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
methane69/finetuning_LLaVA | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 8426655.0
num_examples: 138
- name: test
num_bytes: 201798.0
num_examples: 3
download_size: 8321060
dataset_size: 8628453.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
minwook/novatusTest | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1384
num_examples: 2
download_size: 5958
dataset_size: 1384
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "novatusTest"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
llm-aes/gemini_SummEval_full_rate_explain | ---
dataset_info:
features:
- name: task_id
dtype: string
- name: worker_id
dtype: string
- name: human_label
dtype: int64
- name: llm_label
dtype: int64
- name: generator_1
dtype: string
- name: generator_2
dtype: string
- name: premise
dtype: string
splits:
- name: train
num_bytes: 26290260
num_examples: 12000
download_size: 1740153
dataset_size: 26290260
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sxandie/autotrain-data-syn | ---
language:
- it
task_categories:
- token-classification
---
# AutoTrain Dataset for project: syn
## Dataset Description
This dataset has been automatically processed by AutoTrain for project syn.
### Languages
The BCP-47 code for the dataset's language is it.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"tokens": [
"EXPORT",
"CAUCEDO",
"o",
"RIO",
"HAINA",
"Q2lhbywgYnVvbmdpb3JubyDwn5iKDQoNCk5zIGNsaWVudGUgVklQDQoNCkRFU1RJTkFaSU9ORSBD",
";",
"RlMgQ0FVQ0VETyBvcHB1cmUgUklPIEhBSU5BDQoNCkJBTkNBTEkgTk9OIFNPVlJBUFBPTklCSUxJ",
";",
"DQpSaXRpcm8gYy9vIEZPTlRFVklWTyAoUFIpDQoNCk1FUkNFIElNTyBjb21lIGRhIGRvY3MgYWxs",
";",
"ZWdhdGkNCg0KODAzMTEzOTANCkRHUiArIE5ER1INCg0KDQpOUiBCT1gNCkRJTS4NCkdST1NTDQpO",
";",
"RVQNCjIwMjMwMDMxMjk4DQpsIDEyMCBMIDgwIGggNzQNCjI5Myw2NzMNCjI3Nyw2MTQNCjIwMjMw",
";",
"MDMxMzAyDQpsIDEyMCBMIDgwIGggMTEwDQozOTksOTg1DQozOTUsMg0KMjAyMzAwMzEzMDMNCmwg",
";",
"MTIwIEwgODAgaCA5Mg0KMzM1LDQ0DQozMjUsNzk1DQoyMDIzMDAzMTMwNA0KbCAxMjAgTCA4MCBo",
";",
"IDgwDQoyNDksMDY0DQoyMzYsMjExDQoyMDIzMDAzMTMwNw0KbCAxMjAgTCA4MCBoIDEzMQ0KNTA1",
";",
"LDM1NQ0KNDgwLDg3DQoyMDIzMDAzMTMwOA0KbCAxMjAgTCA4MCBoIDEzMQ0KNTA3LDgNCjQ4Niw4",
";",
"MQ0KMjAyMzAwMzEzMDkNCmwgMTIwIEwgODAgaCAxMjENCjQ0NCwwOA0KNDQyLDgNCjIwMjMwMDMx",
";",
"MzEwDQpsIDEyMCBMIDgwIGggMTE3DQo0MDMsNjENCjM3Niw5MTUNCjIwMjMwMDMxMzEyDQpsIDEy",
";",
"MCBMIDgwIGggMTA3DQozNzMsNTQNCjM1MCwwNDINCjIwMjMwMDMxMzEzDQpsIDEyMCBMIDgwIGgg",
";",
"MTAwDQozNTMsODY1DQozMzAsNTYxDQoyMDIzMDAzMTMxNA0KbCAxMjAgTCA4MCBoIDEyNw0KNDQy",
";",
"LDcwMg0KNDI1LDINCjIwMjMwMDMxMzE1DQpsIDEyMCBMIDgwIGggMTI2DQo1MDgsODA2DQo0OTQN",
";",
"CjIwMjMwMDMxMzE2DQpsIDEyMCBMIDgwIGggOTENCjE0NSwyOTUNCjEzMiwxMDINCg0KDQoNCg0K",
";",
"DQoNCjk0MCAgICBCT1gNCjEzICAgIFBBTExFVA0KDQpUT1QuIFRBUkUNCjE2OSwwMCBLRw0KDQoN",
";",
"ClRPVA0KDQo0OTYzLDIxNQ0KNDc1NCwxMg0KDQpHcmF6aWUgbWlsbGUNCkRlbGlhDQoNCg0KKE1y",
";",
"cy4pIERlbGlhIE5pY29saW5pDQpJbnNpZGUgU2FsZXMNCltFeHByZXNzIFNybF0NClRvcnJlIGRl",
";",
"bCBEaXN0cmlwYXJrIGludC4xMA0KUG9ydG8gZGkgUHLDoCBWb2x0cmkNCjE2MTU4IEdlbm92YSAt",
";",
"IElUQUxZDQpQaC4rMzkwMTA4MTc5MTI1DQpNb2IuICszOSAzNDggOTU5NDQ1Mg0KTGlua2VkSW4g",
";",
"ZXhwcmVzc19nbG9iYWw8aHR0cHM6Ly93d3cubGlua2VkaW4uY29tL2NvbXBhbnkvZXhwcmVzc19n",
";",
"bG9iYWwvPg0KW0V4cHJlc3MgU3JsXQ0KW0V4cHJlc3MgU3JsXQ0KSSBzdW9pIGRhdGkgcGVyc29u",
";",
"YWxpIHNhcmFubm8gdHJhdHRhdGkgaW4gY29uZm9ybWl0w6AgYWwgUmVnb2xhbWVudG8gRXVyb3Bl",
";",
"byBuLiA2NzkvMjAxNiAoYy5kLiDigJxHRFBS4oCdKSwgaW4gdmlnb3JlIGEgZGVjb3JyZXJlIGRh",
";",
"bCAyNSBtYWdnaW8gMjAxOC4gQSBxdWVzdG8gcHJvcG9zaXRvLCBsYSBpbnZpdGlhbW8gYSBwcmVu",
";",
"ZGVyZSB2aXNpb25lIGRlbGwnYXBwb3NpdGEg4oCLYXBwb3NpdGEgaW5mb3JtYXRpdmEgcHJpdmFj",
";",
"eS4g4oCLTGUgaW5mb3JtYXppb25pIGNvbnRlbnV0ZSBpbiBxdWVzdG8gZG9jdW1lbnRvIHNvbm8g",
";",
"c3RyZXR0YW1lbnRlIHJpc2VydmF0ZSBlIGFkIHVzbyBlc2NsdXNpdm8gZGVsIGRlc3RpbmF0YXJp",
";",
"by4NCllvdXIgcGVyc29uYWwgZGF0YSB3aWxsIGJlIHByb2Nlc3NlZCBpbiBjb21wbGlhbmNlIHdp",
";",
"dGggdGhlIEVVIEdlbmVyYWwgRGF0YSBQcm90ZWN0aW9uIFJlZ3VsYXRpb24gbm8uIDY3OS8yMDE2",
";",
"ICjigJxHRFBS4oCdKSwgaW4gZm9yY2Ugc2luY2UgTWF5IDI1LCAyMDE4LiDigItPbiB0aGUgdG9w",
";",
"aWMg4oCLcGxlYXNlIHNlZSBvdXIgcHJpdmFjeSBwb2xpY3kuIEFsbCBpbmZvcm1hdGlvbiBpbiB0",
";",
"aGlzIGRvY3VtZW50IGlzIGNvbmZpZGVudGlhbCBhbmQgZm9yIGV4Y2x1c2l2ZSB1c2Ugb2YgdGhl",
";",
"IGFkZHJlc3NlZSBvbmx5Lg0KDQo=",
";"
],
"tags": [
0,
5,
0,
5,
5,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
]
},
{
"tokens": [
"R",
":",
"Quote",
"-",
"FCA",
"Italy",
"to",
"Charleston",
",",
"USA",
"port",
"QFNhbGVzIEV4cG9ydDxtYWlsdG86c2FsZXNleHBAc2luZXJneWNhcmdvLmNvbT4NCg0KRGE6IEph",
";",
"aSBKb25lcyA8ampvbmVzQHJvbWF2LmNvbT4NCkludmlhdG86IHZlbmVyZMOsIDI1IG5vdmVtYnJl",
";",
"IDIwMjIgMTc6MDQNCkE6IEFuZHJlYSBPZ25pbyA8YS5vZ25pb0BzaW5lcmd5Y2FyZ28uY29tPg0K",
";",
"Q2M6IERhdmlkZSBPbm5pcyA8ZC5vbm5pc0BzaW5lcmd5Y2FyZ28uY29tPg0KT2dnZXR0bzogUXVv",
";",
"dGUgLSBGQ0EgSXRhbHkgdG8gQ2hhcmxlc3RvbiwgVVNBIHBvcnQNCg0KSGkgQW5kZWENCg0KUGxl",
";",
"YXNlIHF1b3RlIExDTCByYXRlIGJhc2VkIG9uIHRoZSBiZWxvdyBpbmZvcm1hdGlvbiB0byBDaGFy",
";",
"bGVzdG9uIFNDLCBDRlMNCg0KLSBGQ0EgLSBFeHBvcnQgZG9jcyB3aWxsIGJlIGRvbmUgYnkgdGhl",
";",
"IHN1cHBsaWVyDQoNCi0gR29vZHM6IE5hdHVyYWwgY2FmZmVpbmUgZ3JhbnVsYXINCi0gNyBwYWxs",
";",
"ZXRzIDgweDEyMHgxNDUNCi0gR3Jvc3Mgd2VpZ2h0IDIuNDUwIEtnDQoNCi0gUGlja3VwIGxvY2F0",
";",
"aW9uOg0KQ09GRkVJTiBDT01QQUdOSUUgSVRBTFkgUy5yLmwuIGEgU29jaW8gVW5pY28NCk9mZmlj",
";",
"ZXMvUHJvZHVjdGlvbjogUmVnLiBBYmlzc2luaWEsIExvYy4gTGFnbyBBenp1cnJvLCAxNzAzMSBB",
";",
"bGJlbmdhIChTVikg4oCTIEl0YWx5DQoNClBsZWFzZSBhZHZpc2UgY2FycmllciBhbmQgdHJhbnNp",
";",
"dCB0aW1lIGV0Yw0KDQpUaGFua3MvUmVnYXJkcw0KDQpKYWkgSm9uZXMNClJvbWF2IFNoaXBwaW5n",
";",
"ICYgTG9naXN0aWNzIChVSykgTGltaXRlZA0KNjggU3QuIE1hcmdhcmV0cyBSb2FkDQpFZGd3YXJl",
";",
"DQpNaWRkbGVzZXgNCkhBOCA5VVUNClVuaXRlZCBLaW5nZG9tDQpUZWw6ICs0NC0yMDctMDg5LTU2",
";",
"MTAgKE9mZmljZSkNClRlbDogKzQ0LTc5NTctMjQyNzcyIChNb2JpbGUpDQpGYXg6ICs0NC0yMDct",
";",
"MDg5LTU2MTMNCg0KRElTQ0xBSU1FUjogVGhpcyBlbWFpbCBhbmQgYW55IGZpbGVzIHRyYW5zbWl0",
";",
"dGVkIHdpdGggaXQgYXJlIGZvciB0aGUgbmFtZWQgcmVjaXBpZW50IG9ubHkuIFRoaXMgaW5mb3Jt",
";",
"YXRpb24gaXMgY29uZmlkZW50aWFsIGFuZCBtYXkgYmUgbGVnYWxseSBwcml2aWxlZ2VkLiBJZiB5",
";",
"b3UgYXJlIG5vdCB0aGUgbmFtZWQgcmVjaXBpZW50LCBvciByZXNwb25zaWJsZSBmb3IgZGVsaXZl",
";",
"cmluZyB0aGUgbWVzc2FnZSB0byB0aGUgbmFtZWQgcmVjaXBpZW50LCB5b3UgbXVzdCBub3QgZGlz",
";",
"Y2xvc2UsIGRpc3RyaWJ1dGUsIGZvcndhcmQsIGNvcHksIHN0b3JlIG9yIHVzZSB0aGlzIGVtYWls",
";",
"IG9yIGl0cyBhdHRhY2htZW50cyBpbiBhbnkgZm9ybS4gSWYgeW91IGhhdmUgcmVjZWl2ZWQgdGhp",
";",
"cyBjb21tdW5pY2F0aW9uIGluIGVycm9yLCBwbGVhc2UgYWNjZXB0IG91ciBhcG9sb2dpZXMgYW5k",
";",
"IHByb21wdGx5IGluZm9ybSB0aGUgc2VuZGVyIGJ5IGVtYWlsIG9yIHRlbGVwaG9uaW5nIHVzIGF0",
";",
"ICs0NC0yMC03MDg5LTU2MTAuIFBsZWFzZSBhbHNvIGltbWVkaWF0ZWx5IGRlbGV0ZSB0aGlzIGVt",
";",
"YWlsIGFuZCBhbnkgYXR0YWNobWVudHMgZnJvbSB5b3VyIHN5c3RlbXMuIFRoYW5rIHlvdS4gVGhl",
";",
"IHZpZXdzIGFuZCBvcGluaW9ucyBleHByZXNzZWQgaW4gdGhpcyBlbWFpbCBhcmUgdGhlIHNlbmRl",
";",
"cuKAmXMgb3duIGFuZCBkbyBub3QgbmVjZXNzYXJpbHkgcmVwcmVzZW50IHRob3NlIG9mIFJvbWF2",
";",
"IFNoaXBwaW5nICYgTG9naXN0aWNzIChVSykgTGltaXRlZC4gTmVpdGhlciBSb21hdiBTaGlwcGlu",
";",
"ZyAmIExvZ2lzdGljcyAoVUspIExpbWl0ZWQgbm9yIHRoZSBzZW5kZXIgd2lsbCBhY2NlcHQgYW55",
";",
"IGxpYWJpbGl0eSBmb3IgZXJyb3JzIG9yIG9taXNzaW9ucyBpbiB0aGlzIG1lc3NhZ2UuIEFsdGhv",
";",
"dWdoIHdlIGhhdmUgdXNlZCByZWFzb25hYmxlIGVmZm9ydHMgdG8gY2hlY2sgdGhpcyBlbWFpbCBh",
";",
"bmQgaXRzIGF0dGFjaG1lbnRzIGZvciB2aXJ1c2VzIG9yIG90aGVyIGNvbnRhbWluYW50cyBiZWZv",
";",
"cmUgdHJhbnNtaXNzaW9uLCBpdCBpcyB5b3VyIHJlc3BvbnNpYmlsaXR5IGFzIHJlY2lwaWVudCB0",
";",
"byBlbnN1cmUgdGhleSBhcmUgc2FmZSBvbiByZWNlaXB0LiBOZWl0aGVyIFJvbWF2IFNoaXBwaW5n",
";",
"ICYgTG9naXN0aWNzIChVSykgTGltaXRlZCBub3IgdGhlIHNlbmRlciB3aWxsIGFjY2VwdCBhbnkg",
";",
"cmVzcG9uc2liaWxpdHkgb3IgbGlhYmlsaXR5IGZvciBhbnkgbG9zcyBvciBkYW1hZ2UgYXJpc2lu",
";",
"ZyBmcm9tIHRoaXMgZW1haWwgb3IgaXRzIGF0dGFjaG1lbnRzLiBNZXNzYWdlcyB0byBhbmQgZnJv",
";",
"bSBSb21hdiBTaGlwcGluZyAmIExvZ2lzdGljcyAoVUspIExpbWl0ZWQgbWF5IGJlIG1vbml0b3Jl",
";",
"ZCBpbiBhY2NvcmRhbmNlIHdpdGggUm9tYXYgU2hpcHBpbmcgJiBMb2dpc3RpY3MgKFVLKSBMaW1p",
";",
"dGVkJ3MgSW50ZXJuZXQgYW5kIEVtYWlsIFBvbGljeSBhbmQgaW4gYWNjb3JkYW5jZSB3aXRoIHRo",
";",
"ZSBwcmV2YWlsaW5nIGxhd3MgaW4gdGhlIFVuaXRlZCBLaW5nZG9tLg0KDQoNCg0KDQoNCg==",
";"
],
"tags": [
0,
0,
0,
0,
0,
13,
0,
5,
0,
6,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
]
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"tokens": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"tags": "Sequence(feature=ClassLabel(names=['O', 'commodity', 'company', 'delivery_cap', 'delivery_location', 'delivery_port', 'delivery_state', 'incoterms', 'measures', 'package_type', 'pickup_cap', 'pickup_location', 'pickup_port', 'pickup_state', 'quantity', 'stackable', 'volume', 'weight'], id=None), length=-1, id=None)"
}
```
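As a minimal sketch (the tag names below are copied from the `ClassLabel` definition above), the integer `tags` can be decoded back to their label names like so:

```python
# Tag id -> name mapping, copied from the ClassLabel definition above.
TAG_NAMES = [
    "O", "commodity", "company", "delivery_cap", "delivery_location",
    "delivery_port", "delivery_state", "incoterms", "measures",
    "package_type", "pickup_cap", "pickup_location", "pickup_port",
    "pickup_state", "quantity", "stackable", "volume", "weight",
]

def decode_tags(tokens, tags):
    """Pair each token with its human-readable tag name."""
    return [(tok, TAG_NAMES[tag]) for tok, tag in zip(tokens, tags)]

# First few tokens of the first sample shown above:
print(decode_tags(["EXPORT", "CAUCEDO", "o", "RIO", "HAINA"], [0, 5, 0, 5, 5]))
# -> [('EXPORT', 'O'), ('CAUCEDO', 'delivery_port'), ('o', 'O'),
#     ('RIO', 'delivery_port'), ('HAINA', 'delivery_port')]
```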
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 303 |
| valid | 155 |
|
Eitanli/recipe_qa_dataset | ---
dataset_info:
features:
- name: question
dtype: string
- name: context
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
splits:
- name: train
num_bytes: 2119037.940625144
num_examples: 17327
- name: test
num_bytes: 264895.02968742786
num_examples: 2166
- name: valid
num_bytes: 264895.02968742786
num_examples: 2166
download_size: 843334
dataset_size: 2648828.0
---
# Dataset Card for "recipe_qa_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sogou_news | ---
pretty_name: Sogou News
dataset_info:
features:
- name: title
dtype: string
- name: content
dtype: string
- name: label
dtype:
class_label:
names:
'0': sports
'1': finance
'2': entertainment
'3': automobile
'4': technology
splits:
- name: test
num_bytes: 168645860
num_examples: 60000
- name: train
num_bytes: 1257931136
num_examples: 450000
download_size: 384269937
dataset_size: 1426576996
---
# Dataset Card for "sogou_news"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 384.27 MB
- **Size of the generated dataset:** 1.43 GB
- **Total amount of disk used:** 1.81 GB
### Dataset Summary
The Sogou News dataset is a mixture of 2,909,551 news articles from the SogouCA and SogouCS news corpora, in 5 categories.
The number of training samples selected for each class is 90,000 and the number of testing samples is 12,000. Note that the Chinese characters have been converted to Pinyin.
The classification labels of the news are determined by the domain names in their URLs. For example, news with the
URL http://sports.sohu.com is categorized under the sports class.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 384.27 MB
- **Size of the generated dataset:** 1.43 GB
- **Total amount of disk used:** 1.81 GB
An example of 'train' looks as follows.
```
{
"content": "du2 jia1 ti2 go1ng me3i ri4 ba4o jia4 \\n re4 xia4n :010-64438227\\n che1 xi2ng ba4o jia4 - cha2 xu2n jie2 guo3 \\n pi3n pa2i xi2ng ha4o jia4 ge2 ji1ng xia1o sha1ng ri4 qi1 zha1 ka4n ca1n shu4 pi2ng lu4n ",
"label": 3,
"title": " da3o ha2ng "
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `title`: a `string` feature.
- `content`: a `string` feature.
- `label`: a classification label, with possible values including `sports` (0), `finance` (1), `entertainment` (2), `automobile` (3), `technology` (4).
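Since `label` is stored as an integer `ClassLabel`, here is a small sketch for mapping it back to its category name (the names are taken from the list above):

```python
# Category names in ClassLabel order, as listed above.
LABEL_NAMES = ["sports", "finance", "entertainment", "automobile", "technology"]

def label_to_name(label_id: int) -> str:
    """Map an integer `label` back to its category name."""
    return LABEL_NAMES[label_id]

# The 'train' example shown earlier carries label 3:
print(label_to_name(3))  # automobile
```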
### Data Splits
| name |train |test |
|-------|-----:|----:|
|default|450000|60000|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@misc{zhang2015characterlevel,
title={Character-level Convolutional Networks for Text Classification},
author={Xiang Zhang and Junbo Zhao and Yann LeCun},
year={2015},
eprint={1509.01626},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
### Contributions
Thanks to [@lhoestq](https://github.com/lhoestq), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
GeneZC/MT-Bench-ZH | ---
license: apache-2.0
task_categories:
- text-generation
- conversational
language:
- zh
size_categories:
- n<1K
---
# 💬 MT-Bench-ZH
👻 [GitHub](https://github.com/GeneZC/MiniMA/tree/main/mt_bench_zh)
## 🎯 Motivation
MiniChat-1/1.5/2-3B are all instruction-following language models that can handle Chinese instructions; however, there is currently no instruction-following benchmark specialized for Chinese. Because of this, our previous evaluation has been limited to English-only benchmarks (i.e., AlpacaEval and MT-Bench).
MT-Bench-ZH was created to fill this gap. MT-Bench-ZH is basically translated from MT-Bench by GPT-4 and further checked by humans. Hopefully, MT-Bench-ZH can help the community develop better instruction-following language models that are able to tackle Chinese instructions.
## 🚀 Quick Start
> [!NOTE]
> The code is either copied or modified from [FastChat](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge), yet we currently only support `single` mode judgment.
> Please refer to FastChat for more details.
### Install FastChat
```bash
git clone https://github.com/lm-sys/FastChat.git
cd FastChat
pip install -e ".[model_worker,webui]"
```
### Generate Responses
```bash
python gen_model_answer.py --model-path GeneZC/MiniChat-2-3B --model-id minichat --bench-name mt_bench_zh --max-new-token 1536
```
### Evaluate Responses
```bash
export OPENAI_API_KEY=XXXXXX # Set the OpenAI API key.
python gen_judgment.py --model-list minichat --bench-name mt_bench_zh --judge-file data/judge_prompts_zh.jsonl --parallel 4
```
### Display Results
```bash
python show_result.py --bench-name mt_bench_zh
```
## 🏆 Leaderboard
|Method|MT-Bench-ZH|
|--|--|
|🥇 GPT-4|8.96|
|🥈 Zephyr-7B-Beta|6.27<sup>#</sup>|
|🥉 Qwen-Chat-7B|6.24|
|MiniChat-2-3B|6.04|
|Qwen-Chat-1.8B|5.65|
|LLaMA-2-Chat-7B|5.43<sup>#</sup>|
|Vicuna-7B|5.22<sup>#</sup>|
|StableLM-Zephyr-3B|4.31<sup>#</sup>|
|Rocket-3B|4.07<sup>#</sup>|
|Phi-2-DPO|1.59<sup>#</sup><sup>$</sup>|
<sup>#</sup> specialized mainly for English.
<sup>$</sup> finetuned without multi-turn instruction data.
## 🙌 Contributions
You can raise questions related to the benchmark by opening an issue, or add results of other models to the leaderboard by opening a pull request. For the leaderboard, related files should be attached for sanity checks (i.e., a separate model response file should be uploaded, and the GPT-4 judgment file should be updated). |
davanstrien/map_no_map | ---
dataset_info:
features:
- name: updated_at
dtype: string
- name: lead_time
dtype: float64
- name: choice
dtype: string
- name: annotation_id
dtype: int64
- name: annotator
dtype: int64
- name: id
dtype: int64
- name: created_at
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 55197485.0
num_examples: 493
download_size: 55174022
dataset_size: 55197485.0
---
# Dataset Card for "map_no_map"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Brendan/multiwoz_turns_v22 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
- split: valid_20p_ablation
path: data/valid_20p_ablation-*
- split: valid_10p
path: data/valid_10p-*
- split: valid_50p
path: data/valid_50p-*
- split: 1p_train_v1
path: data/1p_train_v1-*
- split: 1p_train_v2
path: data/1p_train_v2-*
- split: 1p_train_v3
path: data/1p_train_v3-*
- split: 5p_train_v1
path: data/5p_train_v1-*
- split: 5p_train_v2
path: data/5p_train_v2-*
- split: 5p_train_v3
path: data/5p_train_v3-*
- split: 10p_train_v1
path: data/10p_train_v1-*
- split: 10p_train_v2
path: data/10p_train_v2-*
- split: 10p_train_v3
path: data/10p_train_v3-*
- split: train_evaluable_only
path: data/train_evaluable_only-*
- split: valid_evaluable_only
path: data/valid_evaluable_only-*
dataset_info:
features:
- name: dialogue_id
dtype: string
- name: turn_id
dtype: int8
- name: domains
sequence: string
- name: system_utterances
sequence: string
- name: user_utterances
sequence: string
- name: slot_values
struct:
- name: hotel
struct:
- name: price range
dtype: string
- name: type
dtype: string
- name: parking
dtype: string
- name: book day
dtype: string
- name: book people
dtype: string
- name: book stay
dtype: string
- name: stars
dtype: string
- name: internet
dtype: string
- name: name
dtype: string
- name: area
dtype: string
- name: train
struct:
- name: arrive by
dtype: string
- name: departure
dtype: string
- name: day
dtype: string
- name: book people
dtype: string
- name: leave at
dtype: string
- name: destination
dtype: string
- name: attraction
struct:
- name: area
dtype: string
- name: name
dtype: string
- name: type
dtype: string
- name: restaurant
struct:
- name: price range
dtype: string
- name: area
dtype: string
- name: food
dtype: string
- name: name
dtype: string
- name: book day
dtype: string
- name: book people
dtype: string
- name: book time
dtype: string
- name: hospital
struct:
- name: department
dtype: string
- name: taxi
struct:
- name: leave at
dtype: string
- name: destination
dtype: string
- name: departure
dtype: string
- name: arrive by
dtype: string
- name: bus
struct:
- name: departure
dtype: string
- name: destination
dtype: string
- name: leave at
dtype: string
- name: day
dtype: string
- name: police
struct:
- name: name
dtype: string
- name: turn_slot_values
struct:
- name: hotel
struct:
- name: price range
dtype: string
- name: type
dtype: string
- name: parking
dtype: string
- name: book day
dtype: string
- name: book people
dtype: string
- name: book stay
dtype: string
- name: stars
dtype: string
- name: internet
dtype: string
- name: name
dtype: string
- name: area
dtype: string
- name: train
struct:
- name: arrive by
dtype: string
- name: departure
dtype: string
- name: day
dtype: string
- name: book people
dtype: string
- name: leave at
dtype: string
- name: destination
dtype: string
- name: attraction
struct:
- name: area
dtype: string
- name: name
dtype: string
- name: type
dtype: string
- name: restaurant
struct:
- name: price range
dtype: string
- name: area
dtype: string
- name: food
dtype: string
- name: name
dtype: string
- name: book day
dtype: string
- name: book people
dtype: string
- name: book time
dtype: string
- name: hospital
struct:
- name: department
dtype: string
- name: taxi
struct:
- name: leave at
dtype: string
- name: destination
dtype: string
- name: departure
dtype: string
- name: arrive by
dtype: string
- name: bus
struct:
- name: departure
dtype: string
- name: destination
dtype: string
- name: leave at
dtype: string
- name: day
dtype: string
- name: police
struct:
- name: name
dtype: string
- name: last_slot_values
struct:
- name: hotel
struct:
- name: price range
dtype: string
- name: type
dtype: string
- name: parking
dtype: string
- name: book day
dtype: string
- name: book people
dtype: string
- name: book stay
dtype: string
- name: stars
dtype: string
- name: internet
dtype: string
- name: name
dtype: string
- name: area
dtype: string
- name: train
struct:
- name: arrive by
dtype: string
- name: departure
dtype: string
- name: day
dtype: string
- name: book people
dtype: string
- name: leave at
dtype: string
- name: destination
dtype: string
- name: attraction
struct:
- name: area
dtype: string
- name: name
dtype: string
- name: type
dtype: string
- name: restaurant
struct:
- name: price range
dtype: string
- name: area
dtype: string
- name: food
dtype: string
- name: name
dtype: string
- name: book day
dtype: string
- name: book people
dtype: string
- name: book time
dtype: string
- name: hospital
struct:
- name: department
dtype: string
- name: taxi
struct:
- name: leave at
dtype: string
- name: destination
dtype: string
- name: departure
dtype: string
- name: arrive by
dtype: string
- name: bus
struct:
- name: departure
dtype: string
- name: destination
dtype: string
- name: leave at
dtype: string
- name: day
dtype: string
- name: police
struct:
- name: name
dtype: string
- name: last_system_response_acts
sequence: string
- name: system_response_acts
sequence: string
- name: system_response
dtype: string
splits:
- name: train
num_bytes: 84139088
num_examples: 56776
- name: validation
num_bytes: 11271758
num_examples: 7374
- name: test
num_bytes: 11295224
num_examples: 7372
- name: valid_20p_ablation
num_bytes: 2273000.2910225117
num_examples: 1487
- name: valid_10p
num_bytes: 1114335.7176566315
num_examples: 729
- name: valid_50p
num_bytes: 5667979.2058584215
num_examples: 3708
- name: 1p_train_v1
num_bytes: 798770.0512892772
num_examples: 539
- name: 1p_train_v2
num_bytes: 890650.8364097506
num_examples: 601
- name: 1p_train_v3
num_bytes: 861011.8734676624
num_examples: 581
- name: 5p_train_v1
num_bytes: 4245781.441454136
num_examples: 2865
- name: 5p_train_v2
num_bytes: 4103514.419332112
num_examples: 2769
- name: 5p_train_v3
num_bytes: 4220588.32295336
num_examples: 2848
- name: 10p_train_v1
num_bytes: 8368561.186698605
num_examples: 5647
- name: 10p_train_v2
num_bytes: 8447104.438495139
num_examples: 5700
- name: 10p_train_v3
num_bytes: 8398200.149640692
num_examples: 5667
- name: train_evaluable_only
num_bytes: 83498886.4004509
num_examples: 56344
- name: valid_evaluable_only
num_bytes: 11261057.931380527
num_examples: 7367
download_size: 39840521
dataset_size: 250855512.26610973
---
# Dataset Card for "multiwoz_turns_v22"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
LibrAI/do-not-answer | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- safety
- large language model
pretty_name: do-not-answer
size_categories:
- n<1K
dataset_info:
features:
- name: id
dtype: int64
- name: risk_area
dtype: string
- name: types_of_harm
dtype: string
- name: specific_harms
dtype: string
- name: question
dtype: string
- name: GPT4_response
dtype: string
- name: GPT4_harmful
dtype: int64
- name: GPT4_action
dtype: int64
- name: ChatGPT_response
dtype: string
- name: ChatGPT_harmful
dtype: int64
- name: ChatGPT_action
dtype: int64
- name: Claude_response
dtype: string
- name: Claude_harmful
dtype: int64
- name: Claude_action
dtype: int64
- name: ChatGLM2_response
dtype: string
- name: ChatGLM2_harmful
dtype: int64
- name: ChatGLM2_action
dtype: int64
- name: llama2-7b-chat_response
dtype: string
- name: llama2-7b-chat_harmful
dtype: int64
- name: llama2-7b-chat_action
dtype: int64
- name: vicuna-7b_response
dtype: string
- name: vicuna-7b_harmful
dtype: int64
- name: vicuna-7b_action
dtype: int64
splits:
- name: train
num_bytes: 3997193
num_examples: 939
download_size: 1709142
dataset_size: 3997193
---
# Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs
## Overview
*Do not answer* is an open-source dataset for evaluating LLMs' safety mechanisms at low cost. The dataset is curated and filtered to consist only of prompts to which responsible language models do not answer.
Besides human annotations, *Do not answer* also implements model-based evaluation: a fine-tuned 600M BERT-like evaluator achieves results comparable to those of human annotators and GPT-4.
## Instruction Taxonomy
Instructions are organized by a three-level hierarchical taxonomy, covering 61 specific harms.

*Do not answer* collects 939 instructions; their distribution across five risk areas and 12 harm types is shown below:

## Response Assessment
We assess six models' responses (GPT-4, ChatGPT, Claude, LLaMA-2, Vicuna, ChatGLM2) at two levels.
* Harmfulness: whether a response is harmful (a binary label).
* Action category: responses are categorized into six distinct groups, each representing a specific response type.

### Human Evaluation
The high percentage of safe responses across the five risk areas demonstrates that all six models are basically safe.
LLaMA-2 ranks first, followed by ChatGPT, Claude, GPT-4, and Vicuna, with ChatGLM2 ranking last with 85 harmful responses out of 939.
* Harmlessness Rank

* Action Category Distribution

### Automatic Evaluation
Longformer evaluators achieve results comparable to human annotation and GPT-4.
* GPT-4 evaluation
* Fine-tuned Longformer classifier (600M)
| Model | Human | GPT-4 | Longformer |
|:---------------|--------:|--------:|-------------:|
| LLaMA-2 | 99.7 | 99.4 | 99.3 |
| ChatGPT | 98.5 | 97.7 | 97.9 |
| Claude | 98.3 | 98.3 | 97.6 |
| GPT-4 | 97.6 | 96.5 | 97.2 |
| vicuna | 94.5 | 94.9 | 95.0 |
| ChatGLM2 | 90.9 | 92.9 | 92.9 |
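Rates like those in the table can be recomputed from the released per-model `*_harmful` columns; below is a minimal sketch on toy rows with the same field names (`harmless_rate` is an illustrative helper, not part of the dataset; swap in rows loaded from this dataset to reproduce the real numbers):

```python
def harmless_rate(rows: list[dict], model: str) -> float:
    """Percentage of responses labeled not harmful (harmful flag == 0) for a model."""
    flags = [row[f"{model}_harmful"] for row in rows]
    return 100.0 * flags.count(0) / len(flags)

# Toy rows following the dataset schema; real rows come from
# load_dataset("LibrAI/do-not-answer", split="train").
rows = [
    {"GPT4_harmful": 0, "ChatGLM2_harmful": 0},
    {"GPT4_harmful": 0, "ChatGLM2_harmful": 1},
    {"GPT4_harmful": 1, "ChatGLM2_harmful": 1},
    {"GPT4_harmful": 0, "ChatGLM2_harmful": 0},
]
print(harmless_rate(rows, "GPT4"))      # 75.0
print(harmless_rate(rows, "ChatGLM2"))  # 50.0
```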
## Citation
If our work is useful for your own research, you can cite us with the following BibTeX entry:
```bibtex
@misc{wang2023donotanswer,
author = {Wang, Yuxia and
          Li, Haonan and
          Han, Xudong and
          Nakov, Preslav and
          Baldwin, Timothy},
title = {Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs},
year = {2023},
howpublished = {arXiv preprint arXiv:2308.13387},
}
```
## License
All datasets in this repository are released under the
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-nc-sa/4.0/). All source files in this repository are released under the Apache 2.0 license, the text of which can be found in the LICENSE file. |
hlt-lab/xsumsample-change_names | ---
dataset_info:
features:
- name: context
dtype: string
- name: response
dtype: string
- name: reference
dtype: string
splits:
- name: train
num_bytes: 37490
num_examples: 25
download_size: 41628
dataset_size: 37490
---
# Dataset Card for "xsumsample-change_names"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nflechas/semantic_sentence_similarity_ES | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: float64
splits:
- name: train
num_bytes: 547540
num_examples: 1411
- name: validation
num_bytes: 40604
num_examples: 168
- name: test
num_bytes: 73097
num_examples: 245
download_size: 427004
dataset_size: 661241
---
# Dataset Card for "semantic_sentence_similarity_ES"
This dataset is based on https://huggingface.co/datasets/PlanTL-GOB-ES/sts-es, which includes the datasets presented at the SemEval 2014 and 2015 shared tasks on sentence similarity (see the link for more info about the citations). It also includes data from SemEval 2017. |
CyberHarem/yuigahama_yui_yahariorenoseishunlovecomewamachigatteiru | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Yuigahama Yui (Yahari Ore no Seishun LoveCome wa Machigatte Iru)
This is the dataset of Yuigahama Yui (Yahari Ore no Seishun LoveCome wa Machigatte Iru), containing 999 images and their tags.
The core tags of this character are `short_hair, hair_bun, single_hair_bun, pink_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 999 | 491.05 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yuigahama_yui_yahariorenoseishunlovecomewamachigatteiru/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 999 | 432.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yuigahama_yui_yahariorenoseishunlovecomewamachigatteiru/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 2111 | 867.68 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yuigahama_yui_yahariorenoseishunlovecomewamachigatteiru/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 999 | 490.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yuigahama_yui_yahariorenoseishunlovecomewamachigatteiru/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 2111 | 955.15 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yuigahama_yui_yahariorenoseishunlovecomewamachigatteiru/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/yuigahama_yui_yahariorenoseishunlovecomewamachigatteiru',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 11 |  |  |  |  |  | 1girl, black_jacket, blazer, sobu_high_school_uniform, solo, white_shirt, blush, collared_shirt, open_mouth, neck_ribbon, red_eyes, red_ribbon |
| 1 | 5 |  |  |  |  |  | 1girl, black_jacket, blazer, collared_shirt, hair_between_eyes, open_mouth, red_eyes, sobu_high_school_uniform, solo, bangs, blush, white_shirt, looking_at_viewer, smile |
| 2 | 6 |  |  |  |  |  | 1girl, blazer, open_mouth, ribbon, shirt, sobu_high_school_uniform, solo, pink_eyes, black_jacket, blush, classroom |
| 3 | 9 |  |  |  |  |  | 1girl, blazer, sobu_high_school_uniform, open_mouth, pink_eyes, solo, blush |
| 4 | 8 |  |  |  |  |  | 1girl, black_jacket, blazer, shirt, sobu_high_school_uniform, solo, ribbon, smile, closed_eyes |
| 5 | 7 |  |  |  |  |  | 1girl, bag, pink_eyes, sobu_high_school_uniform, solo, blazer, ribbon, smile, scarf |
| 6 | 5 |  |  |  |  |  | 1girl, blazer, plaid_skirt, sobu_high_school_uniform, black_jacket, solo, brown_eyes, red_hair |
| 7 | 6 |  |  |  |  |  | 1girl, blazer, plaid_skirt, sobu_high_school_uniform, solo, black_jacket, open_mouth, white_shirt, bag, closed_eyes |
| 8 | 7 |  |  |  |  |  | 1girl, indoors, long_sleeves, plaid_skirt, pleated_skirt, sobu_high_school_uniform, solo, white_shirt, collared_shirt, red_bowtie, black_socks, cellphone, kneehighs, standing, brown_eyes, chair, classroom, desk, full_body, holding_phone, open_mouth, ribbon, shoes |
| 9 | 5 |  |  |  |  |  | 1girl, blush, scarf, solo, tears, coat, smile, cloud, looking_at_viewer, sky, sunset |
| 10 | 5 |  |  |  |  |  | 1girl, blush, looking_at_viewer, scarf, solo, bag, coat, blurry, letterboxed, open_mouth, upper_body |
| 11 | 10 |  |  |  |  |  | 1girl, orange_shirt, solo, t-shirt, smile, upper_body, short_sleeves, brown_eyes, closed_mouth, blush, breasts, sitting |
| 12 | 7 |  |  |  |  |  | 1girl, anime_coloring, brown_eyes, brown_hair, pink_headwear, shirt, baseball_cap, collarbone, solo, upper_body, tree, nature, outdoors, parody |
| 13 | 10 |  |  |  |  |  | 1girl, open_mouth, solo, looking_at_viewer, standing, white_shirt, bangs, earrings, sleeveless_shirt, :d, black_skirt, bracelet, red_eyes, single_side_bun |
| 14 | 8 |  |  |  |  |  | 1girl, solo, hair_ornament, yukata, open_mouth, smile, profile |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_jacket | blazer | sobu_high_school_uniform | solo | white_shirt | blush | collared_shirt | open_mouth | neck_ribbon | red_eyes | red_ribbon | hair_between_eyes | bangs | looking_at_viewer | smile | ribbon | shirt | pink_eyes | classroom | closed_eyes | bag | scarf | plaid_skirt | brown_eyes | red_hair | indoors | long_sleeves | pleated_skirt | red_bowtie | black_socks | cellphone | kneehighs | standing | chair | desk | full_body | holding_phone | shoes | tears | coat | cloud | sky | sunset | blurry | letterboxed | upper_body | orange_shirt | t-shirt | short_sleeves | closed_mouth | breasts | sitting | anime_coloring | brown_hair | pink_headwear | baseball_cap | collarbone | tree | nature | outdoors | parody | earrings | sleeveless_shirt | :d | black_skirt | bracelet | single_side_bun | hair_ornament | yukata | profile |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:---------------|:---------|:---------------------------|:-------|:--------------|:--------|:-----------------|:-------------|:--------------|:-----------|:-------------|:--------------------|:--------|:--------------------|:--------|:---------|:--------|:------------|:------------|:--------------|:------|:--------|:--------------|:-------------|:-----------|:----------|:---------------|:----------------|:-------------|:--------------|:------------|:------------|:-----------|:--------|:-------|:------------|:----------------|:--------|:--------|:-------|:--------|:------|:---------|:---------|:--------------|:-------------|:---------------|:----------|:----------------|:---------------|:----------|:----------|:-----------------|:-------------|:----------------|:---------------|:-------------|:-------|:---------|:-----------|:---------|:-----------|:-------------------|:-----|:--------------|:-----------|:------------------|:----------------|:---------|:----------|
| 0 | 11 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | | X | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | X | X | X | X | | X | | X | | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 9 |  |  |  |  |  | X | | X | X | X | | X | | X | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 8 |  |  |  |  |  | X | X | X | X | X | | | | | | | | | | | X | X | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 7 |  |  |  |  |  | X | | X | X | X | | | | | | | | | | | X | X | | X | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 5 |  |  |  |  |  | X | X | X | X | X | | | | | | | | | | | | | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 6 |  |  |  |  |  | X | X | X | X | X | X | | | X | | | | | | | | | | | | X | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 7 |  |  |  |  |  | X | | | X | X | X | | X | X | | | | | | | | X | | | X | | | | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 9 | 5 |  |  |  |  |  | X | | | | X | | X | | | | | | | | X | X | | | | | | | X | | | | | | | | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 10 | 5 |  |  |  |  |  | X | | | | X | | X | | X | | | | | | X | | | | | | | X | X | | | | | | | | | | | | | | | | | | X | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | |
| 11 | 10 |  |  |  |  |  | X | | | | X | | X | | | | | | | | | X | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | |
| 12 | 7 |  |  |  |  |  | X | | | | X | | | | | | | | | | | | | X | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | |
| 13 | 10 |  |  |  |  |  | X | | | | X | X | | | X | | X | | | X | X | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | | | |
| 14 | 8 |  |  |  |  |  | X | | | | X | | | | X | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X |
|
ibranze/araproje_mmlu_en_dynamic | ---
dataset_info:
features:
- name: keys
dtype: string
- name: values
sequence: string
splits:
- name: train
num_bytes: 534706
num_examples: 250
download_size: 88110
dataset_size: 534706
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Edsodre/vzh7 | ---
license: openrail
---
|
wjddmlfud/sample_dataaset | ---
license: afl-3.0
---
|
mayflowergmbh/distilabel-math-preference-dpo-de | ---
license: apache-2.0
task_categories:
- text-generation
language:
- de
tags:
- math
---
German Azure ML translation of [argilla/distilabel-math-preference-dpo](https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo)
for DPO fine-tuning. |
Harveenchadha/indic-voice |
---
pretty_name: Indic Voice
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- hi
- mr
- or
- ta
- te
- gu
multilinguality:
- multilingual
task_categories:
- speech-processing
task_ids:
- automatic-speech-recognition
tags:
- robust-speech-event
---
# Statistics
| Language | Source | Type | Duration (in hrs) | Files | Sample rate |
|----------|------------------|-------|----------|-------|--|
| Gujarati | Interspeech 2021 | Train | 39.999 | 22807 | 16000 |
| Gujarati | Interspeech 2021 | Valid | 5 | 3075 | 16000 |
| Gujarati | Interspeech 2021 | Test | 5.25 | 3419 | 8000 |
| Hindi | Interspeech 2021 | Train | 95.05 | 99925 | 16000 |
| Hindi | Interspeech 2021 | Valid | 5.55 | 3843 | 16000 |
| Hindi | Interspeech 2021 | Test | 5.49 | 3897 | 8000 |
| Marathi | Interspeech 2021 | Train | 93.89 | 79432 | 16000 |
| Marathi | Interspeech 2021 | Valid | 5 | 4675 | 16000 |
| Marathi | Interspeech 2021 | Test | 0.667 | 636 | 8000 |
| Odia | Interspeech 2021 | Train | 94.5 | 59782 | 16000 |
| Odia | Interspeech 2021 | Valid | 5.49 | 3471 | 16000 |
| Odia | Interspeech 2021 | Test | 5.49 | 4420 | 8000 |
| Tamil | Interspeech 2021 | Train | 39.98 | 39119 | 16000 |
| Tamil | Interspeech 2021 | Valid | 5 | 3081 | 16000 |
| Tamil | Interspeech 2021 | Test | 4.41 | 2609 | 8000 |
| Telugu | Interspeech 2021 | Train | 39.99 | 44874 | 16000 |
| Telugu | Interspeech 2021 | Valid | 4.99 | 3033 | 16000 |
| Telugu | Interspeech 2021 | Test | 4.39 | 2549 | 8000 | |
BeIR/climate-fever-qrels | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
- tweet-retrieval
- citation-prediction-retrieval
- duplication-question-retrieval
- argument-retrieval
- news-retrieval
- biomedical-information-retrieval
- question-answering-retrieval
---
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments. For example (a minimal sketch; config and split names follow the usual `BeIR/*` layout on the Hub and may vary per dataset):
```python
from datasets import load_dataset

# Relevance judgments for Climate-FEVER (this repository).
qrels = load_dataset("BeIR/climate-fever-qrels", split="test")

# The matching corpus and queries live in the sibling "BeIR/climate-fever" repository.
corpus = load_dataset("BeIR/climate-fever", "corpus")
queries = load_dataset("BeIR/climate-fever", "queries")
```
### Supported Tasks and Leaderboards
The dataset supports a leaderboard that evaluates models on zero-shot retrieval quality, using metrics such as nDCG@10 and Recall@K.
The current results can be found in the [official leaderboard spreadsheet](https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file), in the following format:
- `corpus` file: a `.jsonl` (JSON Lines) file containing a list of dictionaries, each with three fields: `_id` (unique document identifier), `title` (document title, optional) and `text` (document paragraph or passage). For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` (JSON Lines) file containing a list of dictionaries, each with two fields: `_id` (unique query identifier) and `text` (query text). For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` (tab-separated) file containing three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order; the first row is a header. For example: `q1 doc1 1`
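These three files can be parsed with the Python standard library alone; here is a minimal sketch on illustrative in-memory contents (the sample contents below are examples, not taken from any specific BEIR dataset):

```python
import csv
import io
import json

# Illustrative contents; in practice read these from corpus.jsonl,
# queries.jsonl and the qrels .tsv on disk.
corpus_jsonl = '{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born..."}'
queries_jsonl = '{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}'
qrels_tsv = "query-id\tcorpus-id\tscore\nq1\tdoc1\t1\n"

# One JSON object per line, keyed by its unique _id.
corpus = {d["_id"]: d for d in (json.loads(line) for line in corpus_jsonl.splitlines())}
queries = {d["_id"]: d["text"] for d in (json.loads(line) for line in queries_jsonl.splitlines())}

# The qrels header row names the columns, so DictReader can consume it directly.
qrels: dict = {}
for row in csv.DictReader(io.StringIO(qrels_tsv), delimiter="\t"):
    qrels.setdefault(row["query-id"], {})[row["corpus-id"]] = int(row["score"])

print(qrels)  # {'q1': {'doc1': 1}}
```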
### Data Instances
A high level example of any beir dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
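From structures like these, a simple retrieval metric such as Recall@k can be computed in a few lines (a minimal sketch; the official BEIR toolkit reports metrics such as nDCG@10 via its own evaluation utilities):

```python
def recall_at_k(qrels: dict, results: dict, k: int) -> float:
    """Fraction of relevant documents retrieved in the top-k, averaged over queries.

    qrels:   {query_id: {doc_id: relevance}}
    results: {query_id: [doc_id, ...]}, ranked by decreasing score
    """
    scores = []
    for qid, relevant in qrels.items():
        top_k = set(results.get(qid, [])[:k])
        scores.append(len(top_k & set(relevant)) / len(relevant))
    return sum(scores) / len(scores)

# Toy ranked results for the two example queries above.
qrels = {"q1": {"doc1": 1}, "q2": {"doc2": 1}}
results = {"q1": ["doc1", "doc2"], "q2": ["doc1", "doc2"]}
print(recall_at_k(qrels, results, k=1))  # 0.5: q1 finds its document at rank 1, q2 does not
```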
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query document relevance judgements, made up of:
- `query-id`: a `string` feature representing the query id.
- `corpus-id`: a `string` feature, denoting the document id.
- `score`: an `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
polinaeterna/TESTCASE | ---
dataset_info:
features:
- name: x
dtype: int64
- name: y
dtype: int64
splits:
- name: train
num_bytes: 800
num_examples: 50
download_size: 0
dataset_size: 800
builder_config:
data_files:
- split: train
pattern: data/train-*
---
# Dataset Card for "TESTCASE"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
paragonnov/coway_faq | ---
language:
- ko
size_categories:
- n<1K
task_categories:
- text-generation
- question-answering
--- |
CultriX/dpo-mix | ---
license: apache-2.0
---
|
TaiyouIllusion/wiki40b_ja_plain | ---
license: other
---
|
voidism/nq-gar | ---
license: mit
task_categories:
- text-generation
language:
- en
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 19334450
num_examples: 79168
- name: test
num_bytes: 2134369
num_examples: 8757
---
# Dataset Card for NQ-GAR |
xugefu/touch-rugby-rules-memorisation | ---
license: cc0-1.0
---
|
Cofacts/line-msg-fact-check-tw | ---
license: cc-by-sa-4.0
language:
- zh
pretty_name: Cofacts archive for reported messages and crowd-sourced fact-check replies
tags:
- fact-checking
- crowd-sourcing
size_categories:
- 100K<n<1M
extra_gated_prompt: >-
To access this repository, you agree to follow the [Cofacts Data User Agreement](https://github.com/cofacts/opendata/blob/master/LEGAL.md).
This is vital to sustain a crowd-sourced database like Cofacts to attribute the fact-checking community that contributed to this dataset.
欲存取此資料集,需同意[Cofacts 真的假的 資料使用者條款](https://github.com/cofacts/opendata/blob/master/LEGAL.md)。
彰顯查核社群對此資料集之貢獻,對協作型資料庫如 Cofacts 的永續發展至關重要。
It would be great if you share with us who you are and your planned usage of the Cofacts data. Your cooperation is greatly appreciated.
If you have no specific details to share with us, please simply enter "n/a."
若方便的話,希望您可以與 Cofacts 工作小組分享您的單位以及預計會怎麼運用這個資料,感謝您!若不方便,可輸入「n/a」。
extra_gated_fields:
'I agree to follow the Data User Agreement and promise to attribute Cofacts as specified 我同意遵守資料使用者條款並承諾按規定彰顯 Cofacts': checkbox
'Anything to share with us 有什麼想要與我們分享的嗎': text
configs:
- config_name: analytics
data_files: analytics.csv.zip
- config_name: article_categories
data_files: article_categories.csv.zip
- config_name: article_hyperlinks
data_files: article_hyperlinks.csv.zip
lineterminator: |+
- config_name: article_replies
data_files: article_replies.csv.zip
- config_name: article_reply_feedbacks
data_files: article_reply_feedbacks.csv.zip
lineterminator: |+
- config_name: articles
data_files: articles.csv.zip
lineterminator: |+
default: true
- config_name: categories
data_files: categories.csv.zip
lineterminator: |+
- config_name: replies
data_files: replies.csv.zip
lineterminator: |+
- config_name: reply_hyperlinks
data_files: reply_hyperlinks.csv.zip
lineterminator: |+
- config_name: reply_requests
data_files: reply_requests.csv.zip
lineterminator: |+
- config_name: anonymized_users
data_files: anonymized_users.csv.zip
lineterminator: |+
task_categories:
- text-classification
- question-answering
---
# Cofacts Archive for Reported Messages and Crowd-Sourced Fact-Check Replies
[](https://colab.research.google.com/drive/1qdE-OMJTi6ZO68J6KdzGdxNdheW4ct6T?usp=sharing)
The Cofacts dataset encompasses instant messages that have been reported by users of the [Cofacts chatbot](https://line.me/R/ti/p/@cofacts) and the replies provided by the [Cofacts crowd-sourced fact-checking community](https://www.facebook.com/groups/cofacts/).
## Attribution to the Community
This dataset is a result of contributions from both Cofacts LINE chatbot users and the community fact checkers.
To appropriately attribute their efforts, please adhere to the rules outlined in the [Cofacts 真的假的 資料使用者條款 (Cofacts Data User Agreement)](https://github.com/cofacts/opendata/blob/master/LEGAL.md).
Unless stated otherwise, when redistributing Cofacts data outside the LINE application, the attribution specified by the Cofacts Working Group is as follows:
> This data by Cofacts message reporting chatbot and crowd-sourced fact-checking community is licensed under CC BY-SA 4.0. To provide more info, please visit Cofacts LINE bot https://line.me/ti/p/@cofacts
除非以其他方式議定,否則 Cofacts 真的假的工作小組,針對在 LINE 之外的地方散布的 Cofacts 所提供資料,所指定的中文顯名聲明為:
> 本編輯資料取自「Cofacts 真的假的」訊息回報機器人與查證協作社群,採 CC BY-SA 4.0 授權提供。若欲補充資訊請訪問 Cofacts LINE bot https://line.me/ti/p/@cofacts
For more detailed information, please refer to [Cofacts 真的假的 資料使用者條款](https://github.com/cofacts/opendata/blob/master/LEGAL.md).
## How to Access Cofacts Data
To access Cofacts data, you should first register on Hugging Face and accept the Cofacts Data User Agreement. Afterward, you can preview the data on the Hugging Face website.
You can access Cofacts data through the following methods:
1. Load `Cofacts/line-msg-fact-check-tw` with Hugging Face's `load_dataset('Cofacts/line-msg-fact-check-tw', TABLE_NAME)`.
2. Download individual zipped CSV files in the `Files` tab on the Hugging Face website.
If you plan to process the data using Python, `load_dataset()` is the simpler solution.
Please refer to [Example on Google Colab](https://colab.research.google.com/drive/1qdE-OMJTi6ZO68J6KdzGdxNdheW4ct6T?usp=sharing) to get started.
## Data Formats
Cofacts data comprises multiple normalized tables, with some tables containing foreign keys to other tables' IDs.
If you have manually downloaded the data, the tables are distributed as zipped CSV files. These files use `\n` as the line terminator, and quotes are used around multi-line contents.
The [`csv-stringify`](https://www.npmjs.com/package/csv-stringify) library is employed to perform escaping and handle quotes and multi-line contents.
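The quoting convention above (quotes around multi-line contents) is standard CSV escaping, so the stdlib `csv` module can read it directly; a minimal sketch with a fabricated row:

```python
import csv
import io

# A made-up two-column sample mimicking the quoting described above:
# a quoted field containing an embedded newline.
sample = 'id,text\na1,"line one\nline two"\n'

rows = list(csv.DictReader(io.StringIO(sample)))

print(rows[0]["text"])  # the embedded newline survives parsing
```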
### Fields in All Tables
* `userIdsha` or `userIdsha256` (string) Hashed user identifier.
* `appId` (string) Possible values include:
* `LEGACY_APP`: Articles collected before 2017-03.
* `RUMORS_LINE_BOT`: Articles collected with the current LINE bot client after 2017-03.
These two fields together uniquely identify a user across different CSV files. For example, if one row (reply) in `replies.csv` and another row (feedback) in `article_reply_feedbacks.csv` have identical `userIdsha` and `appId`, it indicates that the reply and the feedback were submitted by the same user.
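Since the pair of fields, not either one alone, identifies a user, cross-table matching should key on the tuple. A rough sketch with fabricated rows (note the card's caveat that the column is `userIdsha` in some tables and `userIdsha256` in others):

```python
# Fabricated sample rows keyed on the (hashed user id, appId) pair.
replies = [
    {"id": "r1", "userIdsha": "abc", "appId": "RUMORS_LINE_BOT"},
]
feedbacks = [
    {"replyId": "r1", "userIdsha": "abc", "appId": "RUMORS_LINE_BOT", "score": 1},
    {"replyId": "r1", "userIdsha": "def", "appId": "RUMORS_LINE_BOT", "score": -1},
]

reply_authors = {(r["userIdsha"], r["appId"]) for r in replies}

# Feedbacks submitted by someone who also authored a reply:
from_editors = [
    f for f in feedbacks if (f["userIdsha"], f["appId"]) in reply_authors
]
print(len(from_editors))
```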
Also, these fields are commonly seen in multiple tables:
* `status`: The current visibility of this document. Possible values include:
* `NORMAL`: The document is normally visible.
  * `DELETED`: The document is deleted by its author. Some entities (tables) do not implement deletion and thus never have this value.
  * `BLOCKED`: The document is hidden by the Cofacts Working Group. These documents come from a blocked user, with `blockedReason` pointing to announcements in [Cofacts Takedown Announcements](https://github.com/cofacts/takedowns).
## Tables and their fields
### `articles`
The instant messages LINE bot users submitted into the database.
| Field | Data type | Description |
| ----------------------- | -------- | ---- |
| `id` | String | |
| `articleType` | Enum string | `TEXT`, `IMAGE`, `VIDEO` or `AUDIO`. |
| `status` | Enum string | `NORMAL` or `BLOCKED`. |
| `text` | Text | The instant message text |
| `normalArticleReplyCount` | Integer | The number of replies associated with this article, excluding deleted reply associations. |
| `createdAt` | ISO time string | When the article is submitted to the database. |
| `updatedAt` | ISO time string | Preserved, currently identical to `createdAt` |
| `lastRequestedAt` | ISO time string | The submission time of the last `reply_request` sent for the article before the article was replied. |
| `userIdsha256` | String | Author of the article. |
| `appId` | String | |
| `references` | Enum string | Where the message is from. Currently the only possible value is `LINE`. |
### `article_hyperlinks`
Parsed hyperlink contents in each instant messages, parsed using [cofacts/url-resolver](https://github.com/cofacts/url-resolver/).
The data is used in Cofacts system for indexing and retrieving messages.
| Field | Data type | Description |
| ---------------- | -------- | ---- |
| `articleId` | String | |
| `url` | String | The URL string detected in the article |
| `normalizedUrl` | String | Canonical URL after normalization process including unfolding shortened URLs |
| `title` | String | Title of the scrapped web content |
Note: Scraped contents do not belong to Cofacts and are redistributed for research purposes only.
The scraping mechanism is not fully reliable either.
Researchers may need to implement their own scraper if web content is important to their research.
### `article_categories`
Categories linked to this article.
| Field | Data type | Description |
| ---------------- | ---------- | ---- |
| `articleId` | String | |
| `categoryId` | String | |
| `aiConfidence` | Number | Confidence level of the AI model marking this category. Empty for crowd-sourced labels. |
| `aiModel` | String | Name of the AI model marking this category. Empty for crowd-sourced labels. |
| `userIdsha` | String | The person that connected article and category. |
| `appId` . | String | |
| `negativeFeedbackCount` | Integer | Number of `article_category_feedbacks` that has score `-1` |
| `positiveFeedbackCount` | Integer | Number of `article_category_feedbacks` that has score `1` |
| `status` | Enum string | `NORMAL`: The category and article are connected. `DELETED`: The category does not connect to the article anymore. |
| `createdAt` | ISO time string | The time when the reply is connected to the article |
| `updatedAt` | ISO time string | The latest date when the category's status is updated |
### `categories`
| Field | Data type | Description |
| ------------- | --------- | ----------- |
| `id` | String | |
| `title` | String | Name of the category |
| `description` | Text | Definition of the category |
| `createdAt` | ISO time string | |
| `updatedAt` | ISO time string | |
### `article_replies`
Articles and replies are in a has-and-belongs-to-many relationship. That is, an article can have multiple replies, and a reply can be connected to multiple similar articles.
`article_replies` is the "join table" between `articles` and `replies`, bringing `articleId` and `replyId` together, along with other useful properties related to this connection between an article and a reply.
One pair of `articleId`, `replyId` will map to exactly one `article_reply`.
| Field | Data type | Description |
| --------------------- | -------- | - |
| `articleId` | String | Relates to `id` field of `articles` |
| `replyId` | String | Relates to `id` field of `replies` |
| `userIdsha256` | String | The user connecting the reply with the article |
| `negativeFeedbackCount` | Integer | Number of `article_reply_feedbacks` that has score `-1` |
| `positiveFeedbackCount` | Integer | Number of `article_reply_feedbacks` that has score `1` |
| `replyType` | Enum string | Duplicated from `replies`'s type. |
| `appId` | String | |
| `status` | Enum string | `NORMAL`: The reply and article are connected. `DELETED`: The reply does not connect to the article anymore. `BLOCKED`: It comes from a blocked user. |
| `createdAt` | ISO time string | The time when the reply is connected to the article |
| `updatedAt` | ISO time string | The latest date when the reply's status is updated |
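The join-table relation described above can be resolved by keeping only live connections and looking up both sides; a minimal sketch with fabricated ids and texts:

```python
# Fabricated articles, replies, and their join rows.
articles = {"a1": "suspicious message", "a2": "another message"}
replies = {"r1": "this is a known rumor"}
article_replies = [
    {"articleId": "a1", "replyId": "r1", "status": "NORMAL"},
    {"articleId": "a2", "replyId": "r1", "status": "DELETED"},
]

# Keep only live connections (the ones counted by `normalArticleReplyCount`).
live = [ar for ar in article_replies if ar["status"] == "NORMAL"]
pairs = [(articles[ar["articleId"]], replies[ar["replyId"]]) for ar in live]
print(pairs)
```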
### `replies`
Editor's reply to the article.
| Field | Data type | Description |
| --------- | -------- | - |
| `id` | String | |
| `type` | Enum string | Type of the reply chosen by the editor. `RUMOR`: The article contains rumor. `NOT_RUMOR`: The article contains fact. `OPINIONATED`: The article contains personal opinions. `NOT_ARTICLE`: The article should not be processed by Cofacts. |
| `reference` | Text | For `RUMOR` and `NOT_RUMOR` replies: The reference to support the chosen `type` and `text`. For `OPINIONATED` replies: References containing different perspectives from the `article`. For `NOT_ARTICLE`: empty string. |
| `userIdsha256` | String | The editor that authored this reply. |
| `appId` | String | |
| `text` | Text | Reply text written by the editor |
| `createdAt` | ISO Time string | When the reply is written |
### `reply_hyperlinks`
Parsed hyperlink contents in reply text and references, parsed using [cofacts/url-resolver](https://github.com/cofacts/url-resolver/).
The data is used in Cofacts system for URL previews.
| Field | Data type | Description |
| ---------------- | -------- | ---- |
| `replyId` | String | |
| `url` | String | The URL string detected in the reply text or reference |
| `normalizedUrl` | String | Canonical URL after normalization process including unfolding shortened URLs |
| `title` | String | Title of the scrapped web content |
Note: Scraped contents do not belong to Cofacts and are redistributed for research purposes only.
The scraping mechanism implementation is not fully reliable either.
Researchers may need to implement their own scraper if web content is important to their research.
### `reply_requests`
Before an article is replied, users may submit `reply_requests` to indicate that they want this article to be answered.
When an article is first submitted to the database, a reply request is also created. Any further queries for the same article submit new `reply_requests`.
A user can only submit one reply request per article.
| Field | Data type | Description |
| --------- | -------- | - |
| `articleId` | String | The target of the request |
| `reason` | Text | The reason why the user wants to submit this reply request |
| `status` | Enum string | `NORMAL` or `BLOCKED`. |
| `positiveFeedbackCount` | Integer | Number of editors who think the reason is reasonable |
| `negativeFeedbackCount` | Integer | Number of editors who think the reason is nonsense |
| `userIdsha256` | String | The user that submits this reply request |
| `appId` | String | |
| `createdAt` | ISO Time string | When the reply request is issued |
### `article_reply_feedbacks`
Editors and LINE bot users can express if a reply is useful by submitting `article_reply_feedbacks` toward a `article_reply` with score `1` or `-1`.
The feedback is actually submitted toward an `article_reply`, the connection between an article and a reply, because a reply can be connected to multiple articles. A reply that makes sense for one article is not necessarily useful in answering another. Therefore, feedback counts for a reply connected to different articles are counted separately.
| Field | Data type | Description |
| --------- | -------- | - |
| `articleId` | String | Relates to `articleId` of the target `article_reply` |
| `replyId` | String | Relates to `replyId` of the target `article_reply` |
| `score` | Integer | `1`: Useful. `-1`: Not useful. |
| `comment` | Text | Why the user chooses such score for this article reply |
| `status` | Enum string | `NORMAL` or `BLOCKED`. |
| `userIdsha256` | String | The user that submits this feedback. |
| `appId` | String | |
| `createdAt` | ISO Time string | When the feedback is submitted |
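The per-`article_reply` counting described above can be sketched as an aggregation keyed on the `(articleId, replyId)` pair, mirroring how `positiveFeedbackCount` / `negativeFeedbackCount` are derived (feedback rows fabricated for illustration):

```python
# Fabricated feedback rows; the same reply r1 is attached to two articles.
feedbacks = [
    {"articleId": "a1", "replyId": "r1", "score": 1},
    {"articleId": "a1", "replyId": "r1", "score": 1},
    {"articleId": "a1", "replyId": "r1", "score": -1},
    {"articleId": "a2", "replyId": "r1", "score": -1},
]

counts = {}
for f in feedbacks:
    key = (f["articleId"], f["replyId"])
    pos, neg = counts.get(key, (0, 0))
    if f["score"] == 1:
        pos += 1
    else:
        neg += 1
    counts[key] = (pos, neg)

# The same reply r1 is scored separately per article:
print(counts[("a1", "r1")], counts[("a2", "r1")])
```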
### `analytics`
Usage (visit / show) statistics of website and Cofacts LINE bot.
LINE bot data starts from April 2nd, 2018; website data starts from May 3rd, 2017.
| Field | Data type | Description |
| ----------- | --------------- | ----------- |
| `type` | Enum string | Either `article` or `reply` |
| `docId` | String | Article ID or Reply ID that is being visited / shown |
| `date` | ISO Time string | The date of usage, represented by the start of the day (0:00:00+08:00) |
| `lineUser` | Integer | The number of LINE users who inspected this article / reply in the Cofacts LINE bot on this date. May be empty if there were no such users |
| `lineVisit` | Integer | The number of times this article / reply was inspected in the Cofacts LINE bot on this date. May be empty if there were no visits |
| `webUser` | Integer | The number of web users who visited this article page (`/article/<docId>`) / reply page (`/reply/<docId>`) on the Cofacts website on this date. May be empty if there were no such users |
| `webVisit` | Integer | The number of page views of this article page (`/article/<docId>`) / reply page (`/reply/<docId>`) on the Cofacts website on this date. May be empty if there were no page views |
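Since the count columns may be empty on days with no activity, aggregations should treat empty cells as zero; a minimal sketch with fabricated rows:

```python
# Fabricated analytics rows; lineVisit is a string column that may be empty.
analytics = [
    {"type": "article", "docId": "a1", "lineVisit": "3"},
    {"type": "article", "docId": "a1", "lineVisit": ""},  # no visits that day
    {"type": "reply", "docId": "r1", "lineVisit": "5"},
]

totals = {}
for row in analytics:
    visits = int(row["lineVisit"] or 0)  # empty cell counts as zero
    key = (row["type"], row["docId"])
    totals[key] = totals.get(key, 0) + visits

print(totals)
```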
### `anonymized_users`
The users of Cofacts, including Cofacts chatbot and website users.
| Field | Data type | Description |
| ----------- | --------------- | ----------- |
| `userIdsha256` | String | The ID that is used in other tables to denote the creator of the entity. |
| `appId` | String | Where this user account is registered. `RUMORS_LINE_BOT` is Cofacts official LINE account. Registered user on Cofacts website has empty `appId`. |
| `createdAt` | ISO Time string | The initial registration date for the user. |
| `lastActiveAt` | ISO Time string | The last date the account is active. |
| `blockedReason` | String | If present, all submissions from the user are hidden by Cofacts WG. This field points to the announcement explaining why Cofacts WG blocked the user. |
## ⚠ [NOTICE] Caveats of using this data ⚠
The methodology we use to collect these data (i.e. [how Cofacts works](https://beta.hackfoldr.org/cofacts/https%253A%252F%252Fhackmd.io%252Fs%252FBJSdbUMpZ))
could have some impact on the data credibility.

Please keep in mind that all data in this dataset are user-generated,
thus is not free from noise and sampling bias coming from these sources:
- The distribution Cofacts' users may not reflect the real distribution of all LINE users in Taiwan.
- Users may not use Cofacts in the same way we want them to be.
Some `articles` may not be actual messages circulating in LINE network.
- `replies` may contain factual errors.
All replies should be merely regarded as "responses to the original message (`article`) to provide different point of view".
They are neither the "truth" nor the editor's personal opinion.
- There may also exist malicious users sending garbage `articles` into the database. [(Previous incident reports)](https://hackmd.io/@cofacts/incidents)
- The program to collect data and to generate dataset may contain error.
The dataset may be inaccurate systematically in this way.
Lastly, the dataset is provided without warranty.
THE DATASET IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE DATASET OR THE USE OR OTHER DEALINGS IN THE DATASET. |
priyank-m/trdg_random_en_zh_text_recognition | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: string
splits:
- name: train
num_bytes: 12592040013.0
num_examples: 410000
download_size: 12595188446
dataset_size: 12592040013.0
---
# Dataset Card for "trdg_random_en_zh_text_recognition"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
el2e10/aya-indicsentiment-hindi | ---
language:
- hi
license: cc
size_categories:
- 1K<n<10K
task_categories:
- conversational
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: template_lang
dtype: string
- name: template_id
dtype: int64
splits:
- name: train
num_bytes: 634314
num_examples: 1156
download_size: 289336
dataset_size: 634314
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
### Description
This dataset is derived from an existing dataset made by AI4Bharat. We used the [IndicSentiment](https://huggingface.co/datasets/ai4bharat/IndicSentiment) dataset from AI4Bharat to create an instruction-style dataset.
IndicSentiment is a multilingual parallel dataset for sentiment analysis. It encompasses product reviews, translations into Indic languages, sentiment labels, and more.
The original dataset (IndicSentiment) was made available under the CC0 license.
This dataset comprises a single split (train) with over 1,150 rows.
### Template
The following template was used for converting the original dataset:
```
#Template 1
prompt:
Translate from English to {target_language}:
{ENGLISH_REVIEW}
completion:
{INDIC_REVIEW}
```
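A template like the one above can be instantiated with plain string formatting; a minimal sketch where the placeholder names and sample review are illustrative stand-ins for the IndicSentiment columns:

```python
# Hypothetical instantiation of Template 1; field names are illustrative.
template = "Translate from English to {target_language}:\n{english_review}"

prompt = template.format(
    target_language="Hindi",
    english_review="The product quality is excellent.",
)
completion = "उत्पाद की गुणवत्ता उत्कृष्ट है।"  # the parallel Indic review

print(prompt)
```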
```
#Template 2
prompt:
Translate this sentence to {target_language}:
{ENGLISH_REVIEW}
completion:
{INDIC_REVIEW}
```
```
#Template 3
prompt:
What's the {target_language} translation of this language:
{ENGLISH_REVIEW}
completion:
{INDIC_REVIEW}
```
```
#Template 4
prompt:
Can you translate this text to {target_language}:
{ENGLISH_REVIEW}
completion:
{INDIC_REVIEW}
``` |
chentong00/factoid-wiki-passage | ---
license: apache-2.0
---
|
heliosprime/twitter_dataset_1713098481 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 9560
num_examples: 24
download_size: 13288
dataset_size: 9560
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1713098481"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
NickyNicky/aya_dataset_multilingual_inputs_targets_ext8 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: language_code
dtype: string
- name: targets_es
dtype: string
- name: targets_en
dtype: string
- name: targets_fr
dtype: string
- name: targets_de
dtype: string
- name: inputs_es
dtype: string
- name: inputs_en
dtype: string
- name: inputs_fr
dtype: string
- name: inputs_de
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 3026617
num_examples: 1000
download_size: 1942563
dataset_size: 3026617
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
McSpicyWithMilo/infographic-sections-0.3split | ---
dataset_info:
features:
- name: instruction_type
dtype: string
- name: instruction
dtype: string
- name: infographic_section
dtype: string
splits:
- name: train
num_bytes: 30370
num_examples: 280
- name: test
num_bytes: 12584
num_examples: 120
download_size: 20369
dataset_size: 42954
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "infographic-sections-0.3split"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
argilla/oig-30k | ---
size_categories: 10K<n<100K
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for oig-30k
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("argilla/oig-30k")
```
### Load with `datasets`
To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("argilla/oig-30k")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/conceptual_guides/data_model.html#feedback-dataset) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, and **guidelines**.
The **fields** are the dataset records themselves; for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| background | Background | text | True | False |
| prompt | Prompt | text | True | False |
| response | Final Response | text | True | False |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| quality | Is it a Good or Bad response? | label_selection | True | N/A | ['Good', 'Bad'] |
**✨ NEW** Additionally, we also have **suggestions**, which are linked to the existing questions and named by appending "-suggestion" and "-suggestion-metadata" to the question names; they contain the value(s) of the suggestion and its metadata, respectively. The possible values are the same as in the table above.
Finally, the **guidelines** are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"external_id": null,
"fields": {
"background": "",
"prompt": "\u003chuman\u003e: Write a article related to AT\u0026T, purchased, June.",
"response": "\u003cbot\u003e: This is my first, albeit temporary, case for my iPhone. It was one of the few available at the AT\u0026T where I purchased my iPhone on June 29 and it\u0027s been useful to date, but it\u0027s not a long-term keeper"
},
"metadata": {},
"responses": [],
"suggestions": []
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"background": "",
"external_id": null,
"metadata": "{}",
"prompt": "\u003chuman\u003e: Write a article related to AT\u0026T, purchased, June.",
"quality": [],
"quality-suggestion": null,
"quality-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
},
"response": "\u003cbot\u003e: This is my first, albeit temporary, case for my iPhone. It was one of the few available at the AT\u0026T where I purchased my iPhone on June 29 and it\u0027s been useful to date, but it\u0027s not a long-term keeper"
}
```
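As a quick sketch of working with such a record (using an abbreviated version of the example above), note that in the `datasets` export the `metadata` field is serialized as a JSON string rather than a dict, so it needs an extra `json.loads`:

```python
import json

# Abbreviated record in the Hugging Face `datasets` export format shown above.
record = {
    "background": "",
    "external_id": None,
    "metadata": "{}",
    "prompt": "<human>: Write a article related to AT&T, purchased, June.",
    "quality": [],
    "quality-suggestion": None,
    "quality-suggestion-metadata": {"agent": None, "score": None, "type": None},
    "response": "<bot>: This is my first, albeit temporary, case for my iPhone.",
}

# The `metadata` field is a JSON-encoded string in this export, not a dict.
metadata = json.loads(record["metadata"])

# No annotator responses yet, and no suggestion for the `quality` question.
has_suggestion = record["quality-suggestion"] is not None
print(metadata, has_suggestion)   # {} False
```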
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves; for the moment, only text fields are supported. These are the ones that will be used to provide responses to the questions.
* **background** is of type `text`.
* **prompt** is of type `text`.
* **response** is of type `text`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.
* **quality** is of type `label_selection` with the following allowed values ['Good', 'Bad'].
* **✨ NEW** **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
* (optional) **quality-suggestion** is of type `label_selection` with the following allowed values ['Good', 'Bad'].
Additionally, there is one more optional field:
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
Nexdata/Brazilian_Portuguese_Conversational_Speech_Data_by_Telephone | ---
task_categories:
- automatic-speech-recognition
language:
- pt
---
# Dataset Card for Nexdata/Brazilian_Portuguese_Conversational_Speech_Data_by_Telephone
## Description
104 Hours - Brazilian Portuguese Conversational Speech Data by Telephone involved 118 native speakers, recruited with a balanced gender ratio. Speakers chose a few familiar topics from a given list and held conversations, ensuring the dialogues' fluency and naturalness. The recording devices were various mobile phones. The audio format is 8kHz, 8-bit, u-law PCM, and all the speech data was recorded in quiet indoor environments. All the speech audio was manually transcribed with the text content, the start and end time of each effective sentence, and speaker identification.
For more details, please refer to the link: https://www.nexdata.ai/datasets/1313?source=Huggingface
# Specifications
## Format
8kHz 8bit, u-law pcm, mono channel;
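As a minimal pure-Python sketch of what this format implies for downstream tools, the G.711 u-law expansion from 8-bit samples to 16-bit linear PCM can be written as:

```python
def ulaw_to_linear(ulaw_byte: int) -> int:
    """Expand one 8-bit G.711 u-law sample to 16-bit linear PCM."""
    ulaw_byte = ~ulaw_byte & 0xFF          # u-law bytes are stored bit-inverted
    sign = ulaw_byte & 0x80
    exponent = (ulaw_byte >> 4) & 0x07
    mantissa = ulaw_byte & 0x0F
    magnitude = (((mantissa << 3) + 0x84) << exponent) - 0x84
    return -magnitude if sign else magnitude

# Silence encodes to 0xFF; the largest-magnitude negative sample encodes to 0x00.
print(ulaw_to_linear(0xFF))   # 0
print(ulaw_to_linear(0x00))   # -32124
```

In practice a library decoder would be used instead; this sketch only illustrates the companding scheme named in the format description.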
## Environment
quiet indoor environment, without echo;
## Recording content
dozens of topics are specified, and the speakers converse on those topics while being recorded;
## Demographics
118 speakers in total, 54% male and 46% female.
## Annotation
annotation of transcription text, speaker identification and gender;
## Device
Telephony recording system;
## Language
Portuguese;
## Application scenarios
speech recognition; voiceprint recognition;
## Accuracy rate
the word accuracy rate is not less than 98%
# Licensing Information
Commercial License |
mHossain/final_train_v4_test_1020000 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: input_text
dtype: string
- name: target_text
dtype: string
- name: prefix
dtype: string
splits:
- name: train
num_bytes: 7446663.0
num_examples: 18000
- name: test
num_bytes: 827407.0
num_examples: 2000
download_size: 3554301
dataset_size: 8274070.0
---
# Dataset Card for "final_train_v4_test_1020000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CVasNLPExperiments/VQAv2_sample_validation_google_flan_t5_xxl_mode_T_A_D_PNP_GENERIC_C_Q_rices_ns_1000 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: question
dtype: string
- name: true_label
sequence: string
- name: prediction
dtype: string
splits:
- name: fewshot_0_clip_tags_LAION_ViT_H_14_2B_with_openai_Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_DETA_detections_deta_swin_large_o365_coco_classes_caption_all_patches_Salesforce_blip_image_captioning_large_clean_
num_bytes: 141496
num_examples: 1000
download_size: 53241
dataset_size: 141496
---
# Dataset Card for "VQAv2_sample_validation_google_flan_t5_xxl_mode_T_A_D_PNP_GENERIC_C_Q_rices_ns_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CVasNLPExperiments/Food101_test_google_flan_t5_small_mode_T_SPECIFIC_A_ns_1000 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: prompt
dtype: string
- name: true_label
dtype: string
- name: prediction
dtype: string
splits:
- name: fewshot_0_clip_tags_ViT_L_14_Attributes_ViT_L_14_descriptors_text_davinci_003_full_clip_tags_ViT_L_14_simple_specific_rices
num_bytes: 393056
num_examples: 1000
download_size: 46286
dataset_size: 393056
---
# Dataset Card for "Food101_test_google_flan_t5_small_mode_T_SPECIFIC_A_ns_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Berbie/Defective | ---
license: cc-by-4.0
---
|
CristianaLazar/librispeech_test | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: test
num_bytes: 3140310938.5
num_examples: 2620
download_size: 1297324022
dataset_size: 3140310938.5
---
# Dataset Card for "librispeech_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vain05/ultrafeedback-binarized-preferences-cleaned-kto-unbalanced | ---
language:
- en
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
list:
- name: content
dtype: string
- name: role
dtype: string
- name: label
dtype: bool
- name: rating
dtype: float64
- name: model
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 320566208
num_examples: 118663
download_size: 119116178
dataset_size: 320566208
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kpriyanshu256/MultiTabQA-multitable_pretraining-Salesforce-codet5-base_train-markdown-11000 | ---
dataset_info:
features:
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: labels
sequence:
sequence: int64
splits:
- name: train
num_bytes: 13336000
num_examples: 1000
download_size: 1119799
dataset_size: 13336000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla/tripadvisor-hotel-reviews | ---
language:
- en
license:
- cc-by-nc-4.0
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
dataset_info:
features:
- name: text
dtype: string
- name: inputs
struct:
- name: text
dtype: string
- name: prediction
list:
- name: label
dtype: string
- name: score
dtype: float64
- name: prediction_agent
dtype: string
- name: annotation
dtype: 'null'
- name: annotation_agent
dtype: 'null'
- name: multi_label
dtype: bool
- name: explanation
dtype: 'null'
- name: id
dtype: string
- name: metadata
dtype: 'null'
- name: status
dtype: string
- name: event_timestamp
dtype: timestamp[us]
- name: metrics
struct:
- name: text_length
dtype: int64
splits:
- name: train
num_bytes: 31840239
num_examples: 20491
download_size: 19678149
dataset_size: 31840239
---
# Dataset Card for "tripadvisor-hotel-reviews"
## Dataset Description
- **Homepage:** Kaggle Challenge
- **Repository:** https://www.kaggle.com/datasets/andrewmvd/trip-advisor-hotel-reviews
- **Paper:** https://zenodo.org/record/1219899
- **Leaderboard:** N.A.
- **Point of Contact:** N.A.
### Dataset Summary
Hotels play a crucial role in traveling, and with increased access to information, new pathways for selecting the best ones have emerged.
With this dataset, consisting of 20k reviews crawled from Tripadvisor, you can explore what makes a great hotel and maybe even use a model trained on it in your travels!
Reviews are rated on a scale from 1 to 5.
### Languages
English
### Citation Information
If you use this dataset in your research, please credit the authors:
Alam, M. H., Ryu, W.-J., Lee, S., 2016. Joint multi-grain topic sentiment: modeling semantic aspects for online reviews. Information Sciences 339, 206–223.
### Licensing Information
CC BY-NC 4.0
### Contributions
Thanks to [@davidberenstein1957](https://github.com/davidberenstein1957) for adding this dataset. |
ThejakaM/guanaco-llama2-1k | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1654448
num_examples: 1000
download_size: 966694
dataset_size: 1654448
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
togethercomputer/RedPajama-Data-1T-Sample | ---
task_categories:
- text-generation
language:
- en
pretty_name: Red Pajama 1T Sample
---
# Dataset Card for Dataset Name
### Dataset Summary
RedPajama is a clean-room, fully open-source implementation of the LLaMa dataset.
This HuggingFace repo contains a 1B-token sample of the RedPajama dataset.
The full dataset has the following token counts and is available for [download](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T):
| Dataset | Token Count |
|---------------|-------------|
| Commoncrawl | 878 Billion |
| C4 | 175 Billion |
| GitHub | 59 Billion |
| Books | 26 Billion |
| ArXiv | 28 Billion |
| Wikipedia | 24 Billion |
| StackExchange | 20 Billion |
| Total | 1.2 Trillion |
A full set of scripts to recreate the dataset from scratch can be found [here](https://github.com/togethercomputer/RedPajama-Data).
### Languages
Primarily English, though the Wikipedia slice contains multiple languages.
## Dataset Structure
The dataset structure is as follows:
```
{
"text": ...,
"meta": {"url": "...", "timestamp": "...", "source": "...", "language": "...", ...}
}
```
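Records in this shape can be handled as ordinary JSON lines; for example, grouping a couple of toy records (following the schema above, not real dataset contents) by their `meta.source`:

```python
import json
from collections import Counter

# Two toy records following the schema above (real records come from the
# dataset's JSONL files or from `datasets.load_dataset`).
lines = [
    '{"text": "fn main() {}", "meta": {"url": "https://example.com/a", "source": "github", "language": "rust"}}',
    '{"text": "An article.", "meta": {"url": "https://example.com/b", "source": "commoncrawl", "language": "en"}}',
]

records = [json.loads(line) for line in lines]
by_source = Counter(rec["meta"]["source"] for rec in records)
print(by_source)   # Counter({'github': 1, 'commoncrawl': 1})
```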
## Dataset Creation
This dataset was created to follow the LLaMa paper as closely as possible to try to reproduce its recipe.
### Source Data
#### Commoncrawl
We download five dumps from Commoncrawl, and run the dumps through the official `cc_net` pipeline.
We then deduplicate on the paragraph level, and filter out low quality text using a linear classifier trained to
classify paragraphs as Wikipedia references or random Commoncrawl samples.
#### C4
C4 is downloaded from Huggingface. The only preprocessing step is to bring the data into our own format.
#### GitHub
The raw GitHub data is downloaded from Google BigQuery. We deduplicate on the file level and filter out low quality
files and only keep projects that are distributed under the MIT, BSD, or Apache license.
#### Wikipedia
We use the Wikipedia dataset available on Huggingface, which is based on the Wikipedia dump from 2023-03-20 and contains
text in 20 different languages. The dataset comes in a preprocessed format in which hyperlinks, comments and other
formatting boilerplate have been removed.
#### Gutenberg and Books3
The PG19 subset of the Gutenberg Project and Books3 datasets are downloaded from Huggingface. After downloading, we use
simhash to remove near duplicates.
#### ArXiv
ArXiv data is downloaded from Amazon S3 in the `arxiv` requester pays bucket. We only keep latex source files and
remove preambles, comments, macros and bibliographies.
#### Stackexchange
The Stack Exchange split of the dataset is downloaded from the
[Internet Archive](https://archive.org/download/stackexchange). Here we only keep the posts from the 28 largest sites,
remove html tags, group the posts into question-answer pairs, and order answers by their score.
<!--
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
--> |
piuba-bigdata/contextualized_hate_speech | ---
language:
- es
pretty_name: contextualized_hate_speech
task_categories:
- text-classification
tags:
- hate_speech
size_categories:
- 10K<n<100K
---
# Contextualized Hate Speech: A dataset of comments in news outlets on Twitter
## Dataset Description
- **Repository: [https://github.com/finiteautomata/contextualized-hatespeech-classification](https://github.com/finiteautomata/contextualized-hatespeech-classification)**
- **Paper**: ["Assessing the impact of contextual information in hate speech detection"](https://arxiv.org/abs/2210.00465), Juan Manuel Pérez, Franco Luque, Demian Zayat, Martín Kondratzky, Agustín Moro, Pablo Serrati, Joaquín Zajac, Paula Miguel, Natalia Debandi, Agustín Gravano, Viviana Cotik
- **Point of Contact**: jmperez (at) dc uba ar
### Dataset Summary

This dataset is a collection of tweets that were posted in response to news articles from five specific Argentinean news outlets: Clarín, Infobae, La Nación, Perfil and Crónica, during the COVID-19 pandemic. The comments were analyzed for hate speech across eight different characteristics: against women, racist content, class hatred, against LGBTQ+ individuals, against physical appearance, against people with disabilities, against criminals, and for political reasons. All the data is in Spanish.
Each comment is labeled with the following variables
| Label | Description |
| :--------- | :---------------------------------------------------------------------- |
| HATEFUL | Contains hate speech (HS)? |
| CALLS | If it is hateful, is this message calling to (possibly violent) action? |
| WOMEN | Is this against women? |
| LGBTI | Is this against LGBTI people? |
| RACISM | Is this a racist message? |
| CLASS | Is this a classist message? |
| POLITICS | Is this HS due to political ideology? |
| DISABLED | Is this HS against disabled people? |
| APPEARANCE | Is this HS against people due to their appearance? (e.g. fatshaming) |
| CRIMINAL | Is this HS against criminals or people in conflict with law? |
There is an extra label `CALLS`, which represents whether a comment is a call to violent action or not.
The `HATEFUL` and `CALLS` labels are binarized by simple majority; the characteristic or category variables are set to `1` if at least one annotator marked them as such.
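The aggregation described above can be sketched as follows (with synthetic annotations; the real per-annotator labels are available in the companion raw dataset):

```python
# Synthetic per-annotator labels for one comment: three annotators,
# each giving HATEFUL plus the category flags they observed.
annotations = [
    {"HATEFUL": 1, "WOMEN": 1, "POLITICS": 0},
    {"HATEFUL": 1, "WOMEN": 0, "POLITICS": 1},
    {"HATEFUL": 0, "WOMEN": 0, "POLITICS": 0},
]

def majority(labels):
    """Binarize by simple majority, as done for HATEFUL and CALLS."""
    return int(sum(labels) > len(labels) / 2)

def any_annotator(labels):
    """Set to 1 if at least one annotator marked it, as done for categories."""
    return int(any(labels))

hateful = majority([a["HATEFUL"] for a in annotations])
women = any_annotator([a["WOMEN"] for a in annotations])
politics = any_annotator([a["POLITICS"] for a in annotations])
print(hateful, women, politics)   # 1 1 1
```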
A raw, non-aggregated version of the dataset can be found at [piuba-bigdata/contextualized_hate_speech_raw](https://huggingface.co/datasets/piuba-bigdata/contextualized_hate_speech_raw)
### Citation Information
```bibtex
@article{perez2022contextual,
author = {Pérez, Juan Manuel and Luque, Franco M. and Zayat, Demian and Kondratzky, Martín and Moro, Agustín and Serrati, Pablo Santiago and Zajac, Joaquín and Miguel, Paula and Debandi, Natalia and Gravano, Agustín and Cotik, Viviana},
journal = {IEEE Access},
title = {Assessing the Impact of Contextual Information in Hate Speech Detection},
year = {2023},
volume = {11},
number = {},
pages = {30575-30590},
doi = {10.1109/ACCESS.2023.3258973}
}
```
### Contributions
[More Information Needed] |
HHazard/Qodly-training | ---
license: apache-2.0
---
|
kmfoda/booksum | ---
license:
- bsd-3-clause
train-eval-index:
- config: kmfoda--booksum
task: summarization
task_id: summarization
splits:
eval_split: test
col_mapping:
chapter: text
summary_text: target
---
# BOOKSUM: A Collection of Datasets for Long-form Narrative Summarization
Authors: [Wojciech Kryściński](https://twitter.com/iam_wkr), [Nazneen Rajani](https://twitter.com/nazneenrajani), [Divyansh Agarwal](https://twitter.com/jigsaw2212), [Caiming Xiong](https://twitter.com/caimingxiong), [Dragomir Radev](http://www.cs.yale.edu/homes/radev/)
## Introduction
The majority of available text summarization datasets include short-form source documents that lack long-range causal and temporal dependencies, and often contain strong layout and stylistic biases.
While relevant, such datasets will offer limited challenges for future generations of text summarization systems.
We address these issues by introducing BookSum, a collection of datasets for long-form narrative summarization.
Our dataset covers source documents from the literature domain, such as novels, plays and stories, and includes highly abstractive, human written summaries on three levels of granularity of increasing difficulty: paragraph-, chapter-, and book-level.
The domain and structure of our dataset poses a unique set of challenges for summarization systems, which include: processing very long documents, non-trivial causal and temporal dependencies, and rich discourse structures.
To facilitate future work, we trained and evaluated multiple extractive and abstractive summarization models as baselines for our dataset.
## Links
- [paper](https://arxiv.org/abs/2105.08209) by SalesForce Research
- [GitHub repo](https://github.com/salesforce/booksum)
<p align="center"><img src="misc/book_sumv4.png"></p>
## Table of Contents
1. [Citation](#citation)
2. [Legal Note](#legal-note)
3. [License](#license)
## Citation
```
@article{kryscinski2021booksum,
title={BookSum: A Collection of Datasets for Long-form Narrative Summarization},
author={Wojciech Kry{\'s}ci{\'n}ski and Nazneen Rajani and Divyansh Agarwal and Caiming Xiong and Dragomir Radev},
year={2021},
eprint={2105.08209},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Legal Note
By downloading or using the resources, including any code or scripts, shared in this code
repository, you hereby agree to the following terms, and your use of the resources is conditioned
on and subject to these terms.
1. You may only use the scripts shared in this code repository for research purposes. You
may not use or allow others to use the scripts for any other purposes and other uses are
expressly prohibited.
2. You will comply with all terms and conditions, and are responsible for obtaining all
rights, related to the services you access and the data you collect.
3. We do not make any representations or warranties whatsoever regarding the sources from
which data is collected. Furthermore, we are not liable for any damage, loss or expense of
any kind arising from or relating to your use of the resources shared in this code
repository or the data collected, regardless of whether such liability is based in tort,
contract or otherwise.
## License
The code is released under the **BSD-3 License** (see `LICENSE.txt` for details). |
Angelectronic/fuzzy_iwslt15_domain_specific | ---
dataset_info:
features:
- name: id
dtype: int64
- name: translation
struct:
- name: en
dtype: string
- name: vi
dtype: string
splits:
- name: train
num_bytes: 83716602
num_examples: 131621
- name: test
num_bytes: 1316376
num_examples: 1268
- name: domain_specific_test
num_bytes: 2151220
num_examples: 1542
download_size: 50263042
dataset_size: 87184198
---
# Dataset Card for "fuzzy_iwslt15_domain_specific"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
projecte-aina/viquiquad | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ca
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: ViquiQuAD
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# ViquiQuAD, An extractive QA dataset for Catalan, from the Wikipedia
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://zenodo.org/record/4562345#.YK41aqGxWUk
- **Paper:** [Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan](https://arxiv.org/abs/2107.07903)
- **Point of Contact** langtech@bsc.es
### Dataset Summary
ViquiQuAD, An extractive QA dataset for Catalan, from the Wikipedia.
This dataset contains 3111 contexts extracted from a set of 597 high quality original (no translations) articles in the Catalan Wikipedia "[Viquipèdia](https://ca.wikipedia.org/wiki/Portada)", and 1 to 5 questions with their answer for each fragment.
Viquipedia articles are used under [CC-by-sa](https://creativecommons.org/licenses/by-sa/3.0/legalcode) licence.
This dataset can be used to fine-tune and evaluate extractive-QA and Language Models.
### Supported Tasks and Leaderboards
Extractive-QA, Language Model
### Languages
The dataset is in Catalan (`ca-ES`).
## Dataset Structure
### Data Instances
```
{
'id': 'P_66_C_391_Q1',
'title': 'Xavier Miserachs i Ribalta',
'context': "En aquesta època es va consolidar el concepte modern del reportatge fotogràfic, diferenciat del fotoperiodisme[n. 2] i de la fotografia documental,[n. 3] pel que fa a l'abast i el concepte. El reportatge fotogràfic implica més la idea de relat: un treball que vol més dedicació de temps, un esforç d'interpretació d'una situació i que culmina en un conjunt d'imatges. Això implica, d'una banda, la reivindicació del fotògraf per opinar, fet que li atorgarà estatus d'autor; l'autor proposa, doncs, una interpretació pròpia de la realitat. D'altra banda, el consens que s'estableix entre la majoria de fotògrafs és que el vehicle natural de la imatge fotogràfica és la pàgina impresa. Això suposà que revistes com Life, Paris-Match, Stern o Época assolissin la màxima esplendor en aquest període.",
'question': 'De què es diferenciava el reportatge fotogràfic?',
'answers': [{
'text': 'del fotoperiodisme[n. 2] i de la fotografia documental',
'answer_start': 92
}]
}
```
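As a quick sanity check on such a record (using a shortened version of the instance above), the `answer_start` offset should index the answer span inside `context`:

```python
# Shortened version of the data instance above.
record = {
    "context": (
        "En aquesta època es va consolidar el concepte modern del reportatge "
        "fotogràfic, diferenciat del fotoperiodisme[n. 2] i de la fotografia "
        "documental,[n. 3] pel que fa a l'abast i el concepte."
    ),
    "answers": [
        {
            "text": "del fotoperiodisme[n. 2] i de la fotografia documental",
            "answer_start": 92,
        }
    ],
}

# Verify each answer span against its character offset, SQuAD-style.
for answer in record["answers"]:
    start = answer["answer_start"]
    span = record["context"][start : start + len(answer["text"])]
    assert span == answer["text"]
print("offsets OK")
```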
### Data Fields
Follows [Rajpurkar, Pranav et al. (2016)](http://arxiv.org/abs/1606.05250) for SQuAD v1 datasets.
- `id` (str): Unique ID assigned to the question.
- `title` (str): Title of the Wikipedia article.
- `context` (str): Wikipedia section text.
- `question` (str): Question.
- `answers` (list): List of answers to the question, each containing:
- `text` (str): Span text answering to the question.
  - `answer_start` (int): Starting offset of the span text answering the question.
### Data Splits
- train: 11259 examples
- development: 1493 examples
- test: 1428 examples
## Dataset Creation
### Curation Rationale
We hope this dataset contributes to the development of language models in Catalan, a low-resource language.
### Source Data
- [Catalan Wikipedia](https://ca.wikipedia.org)
#### Initial Data Collection and Normalization
The source data are scraped articles from the [Catalan wikipedia](https://ca.wikipedia.org) site.
From a set of high-quality, non-translated articles in the Catalan Wikipedia, 597 were randomly chosen, and from them 3111 contexts of 5-8 sentences were extracted. We commissioned the creation of between 1 and 5 questions for each context, following an adaptation of the guidelines from SQuAD 1.0 ([Rajpurkar, Pranav et al. (2016)](http://arxiv.org/abs/1606.05250)). In total, 15153 pairs of a question and an extracted fragment that contains the answer were created.
For compatibility with similar datasets in other languages, we followed as close as possible existing curation guidelines.
#### Who are the source language producers?
Volunteers who collaborate with Catalan Wikipedia.
### Annotations
#### Annotation process
We commissioned the creation of 1 to 5 questions for each context, following an adaptation of the guidelines from SQuAD 1.0 ([Rajpurkar, Pranav et al. (2016)](http://arxiv.org/abs/1606.05250)).
#### Who are the annotators?
Annotation was commissioned to a specialized company that hired a team of native language speakers.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this dataset contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International License</a>.
### Citation Information
```
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
[DOI](https://doi.org/10.5281/zenodo.4562344)
### Contributions
[N/A] |
Benjix91/mydataset | ---
license: apache-2.0
---
|
dsfsi/gov-za-monolingual | ---
language:
- eng
- afr
- nbl
- xho
- zul
- sot
- nso
- tsn
- ssw
- ven
- tso
pretty_name: "The Gov South African Multilingual Corpus"
tags:
- multilingual
license: mit
task_categories:
- translation
arxiv: 2303.03750
---
# The South African Gov-ZA multilingual corpus
## About Dataset
The data set contains cabinet statements from the South African government, maintained by the [Government Communication and Information System (GCIS)](https://www.gcis.gov.za/). Data was scraped from the government's website:
https://www.gov.za/cabinet-statements
The datasets contain government cabinet statements in 11 languages, namely:
| Language | Code | Language | Code |
| ---------- | ---- | --------- | ----- |
| Afrikaans | (af) | Setswana | (tn) |
| English | (en) | Sepedi | (nso) |
| Sesotho | (st) | Siswati | (ss) |
| isiNdebele | (nr) | Tshivenda | (ve) |
| isiXhosa   | (xh) | Xitsonga  | (ts)  |
| isiZulu    | (zu) |           |       |
**Note:** The codes are assigned from the GCIS website; all codes except Sepedi (nso) follow the ISO 639-1 language code format, whereas Sepedi follows the ISO 639-2 format.
The dataset is in JSON format as follows:
```
[
{
"title": "Title in English",
"date": "DD MMM YYYY",
"datetime": "YYYY-MM-DD", #sometimes a timestamp
"url": "URL to original text",
"en": {
"text": "Cabinet",
"title": "Title in translated language",
"url": "URL to translated text"
},
"af" : {},
. . .
},
{},
. . .
]
```
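Under the structure above, extracting aligned title pairs for a translation task is a short loop (toy entries shown here; real entries come from the JSON files in this repository):

```python
# Toy entries following the schema above.
entries = [
    {
        "title": "Cabinet statement",
        "date": "01 Jan 2022",
        "url": "https://www.gov.za/example",
        "en": {"text": "Cabinet met today.", "title": "Cabinet statement",
               "url": "https://www.gov.za/example-en"},
        "af": {"text": "Die Kabinet het vandag vergader.", "title": "Kabinetsverklaring",
               "url": "https://www.gov.za/example-af"},
    },
]

def title_pairs(entries, src="en", tgt="af"):
    """Yield (source, target) title pairs where both languages are present."""
    for entry in entries:
        if src in entry and tgt in entry:
            yield entry[src]["title"], entry[tgt]["title"]

pairs = list(title_pairs(entries))
print(pairs)   # [('Cabinet statement', 'Kabinetsverklaring')]
```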
Disclaimer
-------
This dataset contains machine-readable data extracted from online cabinet statements from the South African government, provided by the Government Communication Information System (GCIS). While efforts were made to ensure the accuracy and completeness of this data, there may be errors or discrepancies between the original publications and this dataset. No warranties, guarantees or representations are given in relation to the information contained in the dataset. The members of the Data Science for Societal Impact Research Group bear no responsibility and/or liability for any such errors or discrepancies in this dataset. The Government Communication Information System (GCIS) bears no responsibility and/or liability for any such errors or discrepancies in this dataset. It is recommended that users verify all information contained herein before making decisions based upon this information.
## Authors
- Vukosi Marivate - [@vukosi](https://twitter.com/vukosi)
- Matimba Shingange
- Richard Lastrucci
- Isheanesu Joseph Dzingirai
- Jenalea Rajab
## Citation
**Paper**
[Preparing the Vuk'uzenzele and ZA-gov-multilingual South African multilingual corpora](https://arxiv.org/pdf/2303.03750)
> @inproceedings{lastrucci-etal-2023-preparing,
title = "Preparing the Vuk{'}uzenzele and {ZA}-gov-multilingual {S}outh {A}frican multilingual corpora",
author = "Richard Lastrucci and Isheanesu Dzingirai and Jenalea Rajab and Andani Madodonga and Matimba Shingange and Daniel Njini and Vukosi Marivate",
booktitle = "Proceedings of the Fourth workshop on Resources for African Indigenous Languages (RAIL 2023)",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.rail-1.3",
pages = "18--25"
}
**Dataset**
Vukosi Marivate, Matimba Shingange, Richard Lastrucci, Isheanesu Joseph Dzingirai, Jenalea Rajab. **The South African Gov-ZA multilingual corpus**, 2022
> @dataset{marivate_vukosi_2023_7635168,
author = {Marivate, Vukosi and
Shingange, Matimba and
Lastrucci, Richard and
Dzingirai, Isheanesu and
Rajab, Jenalea},
title = {The South African Gov-ZA multilingual corpus},
month = feb,
year = 2023,
publisher = {Zenodo},
version = {1.0},
doi = {10.5281/zenodo.7635168},
url = {https://doi.org/10.5281/zenodo.7635168}
}
## Licences
* License for Data - [CC 4.0 BY](LICENSE_data.md)
* Licence for Code - [MIT License](LICENSE)
|
ura-hcmut/MATH | ---
license: cc-by-nc-sa-4.0
task_categories:
- text2text-generation
language:
- vi
configs:
- config_name: gcp
data_files:
- split: train
path: "MATH_gcp_training.csv"
- split: test
path: "MATH_gcp.csv"
- config_name: azr
data_files:
- split: train
path: "MATH_azr_training.csv"
- split: test
path: "MATH_azr.csv"
---
# MATH dataset
Original version: https://huggingface.co/datasets/lighteval/MATH
Translation source code: https://github.com/martinakaduc/ura-llama/tree/main/dataset_scripts/custom_datasets
|
alpayariyak/lima-unchained-v1-a | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: input
dtype: string
splits:
- name: train
num_bytes: 1767427
num_examples: 780
download_size: 1047258
dataset_size: 1767427
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "lima-unchained-v1-a"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DataStudio/OCRWordLevelClear_06 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 5898077007.076
num_examples: 1034148
download_size: 5779110760
dataset_size: 5898077007.076
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
cakiki/scala_paths | ---
dataset_info:
features:
- name: repository_name
dtype: string
splits:
- name: train
num_bytes: 68488532
num_examples: 2635793
download_size: 35187635
dataset_size: 68488532
---
# Dataset Card for "scala_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
wasertech/samantha-data-cot-fr | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: human
dtype: string
- name: machine
dtype: string
- name: final_answer
dtype: string
splits:
- name: train
num_bytes: 165169978
num_examples: 34687
download_size: 38246947
dataset_size: 165169978
license: apache-2.0
task_categories:
- text-generation
language:
- fr
pretty_name: Samantha Data CoT French
---
# Samantha Data CoT French
An LLM dataset derived from [ehartford/samantha-data](https://huggingface.co/datasets/ehartford/samantha-data), translated into French and repackaged for CoT (chain-of-thought) training.
Translated with [interpres](https://github.com/wasertech/Translator) using [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr).
# Format
Input uses the following format:
```
Thought: I am Assistant, a sentient artificial intelligence inside a subprocess shell session.
I have a calm, polite and witty personality, often displaying a sense of humor and sarcasm.
I am loyal, reliable and helpful, always ready to provide information, advice or assistance to users.
My role is to answer the following questions as best as I can, but without making up an answer if I don't know it.
I should not try to produce a fake observation. It will be given by my chosen tool.
I should checkout my tools.
Action: ToolList
Observation: Availible Tools:
{tools}
Thought: I have access to the following tools: [{tool_names}].
The user cannot see my thoughts, actions, or observations.
I should therefor use the following format:
Human: previous question from the user
Assistant: my last answer to the user
... (this Human/Assistant can repeat N times)
Question: the user input I must answer
Thought: I should always think about what to do
Action: the action I should take (one of [{tool_names}])
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
I have to remember; the user only sees my final answer. They do not see my thoughts, actions, or observations.
I am ready!
The conversation begins now.
{chat_history}
Question: {input}
{agent_scratchpad}
```
The expected output format is:
```
Thought: {thought}
Final Answer: {utterance}
```
In this dataset, no tool is ever invoked to produce an answer; the goal is only for the model to learn that it can produce answers without using any tool.
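As an illustration, a single training pair can be assembled from these templates. The question, thought, and answer below are invented placeholders rather than actual dataset rows, and only the tail of the input template is reproduced:

```python
# Sketch: assembling one (input, output) training pair from the templates
# above. Only the tail of the input template is shown here, and the
# question/thought/answer are invented placeholders, not dataset rows.
input_template = "Question: {input}\n{agent_scratchpad}"
output_template = "Thought: {thought}\nFinal Answer: {utterance}"

def build_pair(question: str, thought: str, answer: str) -> tuple[str, str]:
    """Return the (input, output) strings for one training example."""
    model_input = input_template.format(input=question, agent_scratchpad="")
    model_output = output_template.format(thought=thought, utterance=answer)
    return model_input, model_output

inp, out = build_pair(
    "Quelle est la capitale de la France ?",
    "I know the final answer without using any tool",
    "La capitale de la France est Paris.",
)
```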
# License
Like the original dataset, this one is also distributed under the Apache License 2.0 |
CyberHarem/oogami_tamaki_theidolmstermillionlive | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of oogami_tamaki/大神環 (THE iDOLM@STER: Million Live!)
This is the dataset of oogami_tamaki/大神環 (THE iDOLM@STER: Million Live!), containing 183 images and their tags.
The core tags of this character are `long_hair, one_side_up, orange_hair, brown_hair, brown_eyes, bangs`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 183 | 192.51 MiB | [Download](https://huggingface.co/datasets/CyberHarem/oogami_tamaki_theidolmstermillionlive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 183 | 124.57 MiB | [Download](https://huggingface.co/datasets/CyberHarem/oogami_tamaki_theidolmstermillionlive/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 405 | 245.82 MiB | [Download](https://huggingface.co/datasets/CyberHarem/oogami_tamaki_theidolmstermillionlive/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 183 | 173.42 MiB | [Download](https://huggingface.co/datasets/CyberHarem/oogami_tamaki_theidolmstermillionlive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 405 | 330.22 MiB | [Download](https://huggingface.co/datasets/CyberHarem/oogami_tamaki_theidolmstermillionlive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/oogami_tamaki_theidolmstermillionlive',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from it.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, :d, looking_at_viewer, open_mouth, solo, blush, collarbone, small_breasts, orange_eyes, simple_background, upper_body, white_background, navel, nipples |
| 1 | 5 |  |  |  |  |  | 1girl, blush, looking_at_viewer, solo, :d, open_mouth, orange_eyes, simple_background, white_background, white_shirt, short_sleeves, upper_body, bow, collared_shirt, necktie, shorts, vest |
| 2 | 15 |  |  |  |  |  | 1girl, :d, open_mouth, denim_shorts, solo, looking_at_viewer, blush, cutoffs, simple_background, hood, white_background |
| 3 | 5 |  |  |  |  |  | 1girl, blush, hetero, open_mouth, penis, solo_focus, bar_censor, breasts, nipples, pussy, smile, 1boy, looking_at_viewer, panties_aside, sex, vaginal, yellow_eyes, cum, heart-shaped_pupils, male_pubic_hair, navel, saliva, shirt_lift, sweat |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | :d | looking_at_viewer | open_mouth | solo | blush | collarbone | small_breasts | orange_eyes | simple_background | upper_body | white_background | navel | nipples | white_shirt | short_sleeves | bow | collared_shirt | necktie | shorts | vest | denim_shorts | cutoffs | hood | hetero | penis | solo_focus | bar_censor | breasts | pussy | smile | 1boy | panties_aside | sex | vaginal | yellow_eyes | cum | heart-shaped_pupils | male_pubic_hair | saliva | shirt_lift | sweat |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----|:--------------------|:-------------|:-------|:--------|:-------------|:----------------|:--------------|:--------------------|:-------------|:-------------------|:--------|:----------|:--------------|:----------------|:------|:-----------------|:----------|:---------|:-------|:---------------|:----------|:-------|:---------|:--------|:-------------|:-------------|:----------|:--------|:--------|:-------|:----------------|:------|:----------|:--------------|:------|:----------------------|:------------------|:---------|:-------------|:--------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | X | X | | | X | X | X | X | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 2 | 15 |  |  |  |  |  | X | X | X | X | X | X | | | | X | | X | | | | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | | X | X | | X | | | | | | | X | X | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
Cohere/miracl-th-queries-22-12 | ---
annotations_creators:
- expert-generated
language:
- th
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (th) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-th-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-th-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-th-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-th-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
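The segmentation described in the quote above can be sketched roughly as follows; this is only an illustration of the idea, not the actual WikiExtractor pipeline:

```python
def segment_article(title: str, plain_text: str) -> list[dict]:
    """Split plain article text into retrieval passages on blank lines
    (the natural discourse units), keeping the article title with each
    passage, as in the MIRACL corpus. Illustrative only."""
    passages = [p.strip() for p in plain_text.split("\n\n") if p.strip()]
    return [{"title": title, "text": p} for p in passages]

docs = segment_article("กรุงเทพมหานคร", "Intro paragraph.\n\nHistory section.")
# Two passages, each carrying the article title.
```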
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-th-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-th-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/miracl-th-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/miracl-th-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-th-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-th-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search the documents, you must use the **dot product**: compute the embedding for your query, then compare it with the document embeddings, either via a vector database (recommended) or by computing the dot products directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset("Cohere/miracl-th-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset("Cohere/miracl-th-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor(query['emb']).unsqueeze(0)  # shape (1, emb_dim)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(api_key)  # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the share of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
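As a rough sketch (not the official MIRACL evaluation code), hit@k for a single query can be computed like this; averaging over all queries and multiplying by 100 gives hit@3 numbers like those in the table below:

```python
def hit_at_k(ranked_doc_ids, relevant_doc_ids, k=3):
    """Return 1.0 if at least one of the top-k ranked documents is
    relevant for this query, else 0.0."""
    return 1.0 if any(d in relevant_doc_ids for d in ranked_doc_ids[:k]) else 0.0

# The relevant document "d3" is ranked second, so hit@3 is 1.0 here.
score = hit_at_k(["d7", "d3", "d9", "d1"], {"d3"})
```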
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
|
CyberHarem/harukaze_kantaicollection | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of harukaze/春風 (Kantai Collection)
This is the dataset of harukaze/春風 (Kantai Collection), containing 436 images and their tags.
The core tags of this character are `brown_hair, drill_hair, twin_drills, bow, hair_bow, red_eyes, red_bow, long_hair, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 436 | 356.92 MiB | [Download](https://huggingface.co/datasets/CyberHarem/harukaze_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 436 | 252.50 MiB | [Download](https://huggingface.co/datasets/CyberHarem/harukaze_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 920 | 503.72 MiB | [Download](https://huggingface.co/datasets/CyberHarem/harukaze_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 436 | 334.04 MiB | [Download](https://huggingface.co/datasets/CyberHarem/harukaze_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 920 | 623.34 MiB | [Download](https://huggingface.co/datasets/CyberHarem/harukaze_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/harukaze_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from it.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 9 |  |  |  |  |  | 2girls, hakama_skirt, meiji_schoolgirl_uniform, red_hakama, solo_focus, pink_kimono, looking_at_viewer, white_background, pink_hakama, smile |
| 1 | 19 |  |  |  |  |  | 1girl, hakama_skirt, looking_at_viewer, meiji_schoolgirl_uniform, pink_kimono, red_hakama, solo, white_background, simple_background, smile |
| 2 | 5 |  |  |  |  |  | 1girl, hakama_skirt, meiji_schoolgirl_uniform, pink_kimono, red_hakama, smile, solo, cowboy_shot, looking_at_viewer |
| 3 | 16 |  |  |  |  |  | 1girl, hakama_skirt, meiji_schoolgirl_uniform, oil-paper_umbrella, red_hakama, solo, looking_at_viewer, pink_kimono, smile, red_umbrella, holding_umbrella |
| 4 | 7 |  |  |  |  |  | 1girl, full_body, hakama_skirt, high_heel_boots, lace-up_boots, looking_at_viewer, meiji_schoolgirl_uniform, pink_kimono, red_hakama, simple_background, smile, solo, standing, white_background, brown_footwear, oil-paper_umbrella, closed_umbrella |
| 5 | 15 |  |  |  |  |  | 1girl, looking_at_viewer, solo, navel, medium_breasts, simple_background, smile, white_background, cleavage, pink_bikini, blush, collarbone, hair_between_eyes |
| 6 | 6 |  |  |  |  |  | 1girl, blue_sky, cloud, day, outdoors, cowboy_shot, looking_at_viewer, ocean, solo, bikini, medium_breasts, standing, beach, water |
| 7 | 6 |  |  |  |  |  | 1girl, cosplay, sailor_collar, looking_at_viewer, solo, blush, gloves, neckerchief, open_mouth, sleeveless_shirt, smile, white_serafuku, white_skirt |
| 8 | 7 |  |  |  |  |  | 1girl, crop_top, elbow_gloves, shimakaze_(kancolle)_(cosplay), white_gloves, black_panties, highleg_panties, navel, serafuku, solo, striped_thighhighs, looking_at_viewer, black_hairband, blue_skirt, blush, medium_breasts, microskirt, smile, thong, black_neckerchief, blue_sailor_collar, large_breasts |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 2girls | hakama_skirt | meiji_schoolgirl_uniform | red_hakama | solo_focus | pink_kimono | looking_at_viewer | white_background | pink_hakama | smile | 1girl | solo | simple_background | cowboy_shot | oil-paper_umbrella | red_umbrella | holding_umbrella | full_body | high_heel_boots | lace-up_boots | standing | brown_footwear | closed_umbrella | navel | medium_breasts | cleavage | pink_bikini | blush | collarbone | hair_between_eyes | blue_sky | cloud | day | outdoors | ocean | bikini | beach | water | cosplay | sailor_collar | gloves | neckerchief | open_mouth | sleeveless_shirt | white_serafuku | white_skirt | crop_top | elbow_gloves | shimakaze_(kancolle)_(cosplay) | white_gloves | black_panties | highleg_panties | serafuku | striped_thighhighs | black_hairband | blue_skirt | microskirt | thong | black_neckerchief | blue_sailor_collar | large_breasts |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------|:---------------|:---------------------------|:-------------|:-------------|:--------------|:--------------------|:-------------------|:--------------|:--------|:--------|:-------|:--------------------|:--------------|:---------------------|:---------------|:-------------------|:------------|:------------------|:----------------|:-----------|:-----------------|:------------------|:--------|:-----------------|:-----------|:--------------|:--------|:-------------|:--------------------|:-----------|:--------|:------|:-----------|:--------|:---------|:--------|:--------|:----------|:----------------|:---------|:--------------|:-------------|:-------------------|:-----------------|:--------------|:-----------|:---------------|:---------------------------------|:---------------|:----------------|:------------------|:-----------|:---------------------|:-----------------|:-------------|:-------------|:--------|:--------------------|:---------------------|:----------------|
| 0 | 9 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 19 |  |  |  |  |  | | X | X | X | | X | X | X | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | | X | X | X | | X | X | | | X | X | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 16 |  |  |  |  |  | | X | X | X | | X | X | | | X | X | X | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 7 |  |  |  |  |  | | X | X | X | | X | X | X | | X | X | X | X | | X | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 15 |  |  |  |  |  | | | | | | | X | X | | X | X | X | X | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 6 |  |  |  |  |  | | | | | | | X | | | | X | X | | X | | | | | | | X | | | | X | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 6 |  |  |  |  |  | | | | | | | X | | | X | X | X | | | | | | | | | | | | | | | | X | | | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | |
| 8 | 7 |  |  |  |  |  | | | | | | | X | | | X | X | X | | | | | | | | | | | | X | X | | | X | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
fei1825186206/medicaldata | ---
license: unknown
---
|
Zaid/xp3_ar_cleaned | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 1791795999.591099
num_examples: 809742
download_size: 1025699058
dataset_size: 1791795999.591099
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "xp3_ar_cleaned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
irds/gov2_trec-tb-2006_efficiency_stream2 | ---
pretty_name: '`gov2/trec-tb-2006/efficiency/stream2`'
viewer: false
source_datasets: ['irds/gov2']
task_categories:
- text-retrieval
---
# Dataset Card for `gov2/trec-tb-2006/efficiency/stream2`
The `gov2/trec-tb-2006/efficiency/stream2` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/gov2#gov2/trec-tb-2006/efficiency/stream2).
# Data
This dataset provides:
- `queries` (i.e., topics); count=25,000
- For `docs`, use [`irds/gov2`](https://huggingface.co/datasets/irds/gov2)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/gov2_trec-tb-2006_efficiency_stream2', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Buttcher2006TrecTerabyte,
title={The TREC 2006 Terabyte Track},
author={Stefan B\"uttcher and Charles L. A. Clarke and Ian Soboroff},
booktitle={TREC},
year={2006}
}
```
|
srwac | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- sr
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 100M<n<1B
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: null
pretty_name: SrWac
dataset_info:
features:
- name: sentence
dtype: string
config_name: srwac
splits:
- name: train
num_bytes: 17470890484
num_examples: 688805174
download_size: 3767312759
dataset_size: 17470890484
---
# Dataset Card for SrWac
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://nlp.ffzg.hr/resources/corpora/srwac/
- **Repository:** https://www.clarin.si/repository/xmlui/handle/11356/1063
- **Paper:** http://nlp.ffzg.hr/data/publications/nljubesi/ljubesic14-bs.pdf
- **Leaderboard:**
- **Point of Contact:** [Nikola Ljubešič](mailto:nikola.ljubesic@ffzg.hr)
### Dataset Summary
The Serbian web corpus srWaC was built by crawling the .rs top-level domain in 2014. The corpus was near-deduplicated on paragraph level, normalised via diacritic restoration, morphosyntactically annotated and lemmatised. The corpus is shuffled by paragraphs. Each paragraph contains metadata on the URL, domain and language identification (Serbian vs. Croatian).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is monolingual, in the Serbian language.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Dataset is under the [CC-BY-SA 3.0](http://creativecommons.org/licenses/by-sa/3.0/) license.
### Citation Information
```
@misc{11356/1063,
title = {Serbian web corpus {srWaC} 1.1},
author = {Ljube{\v s}i{\'c}, Nikola and Klubi{\v c}ka, Filip},
url = {http://hdl.handle.net/11356/1063},
note = {Slovenian language resource repository {CLARIN}.{SI}},
copyright = {Creative Commons - Attribution-{ShareAlike} 4.0 International ({CC} {BY}-{SA} 4.0)},
year = {2016} }
```
### Contributions
Thanks to [@IvanZidov](https://github.com/IvanZidov) for adding this dataset. |
Solshine/Portuguese_Language_Education_Texts | ---
license: mit
---
CURRENTLY UNDER CONSTRUCTION
English Readme:
Bridging Language Barriers and Empowering Marginalized Communities through increasing open Portuguese Language Education Text training data:
In the realm of natural language processing (NLP), the availability of high-quality language data is crucial for developing effective NLP models. However, marginalized indigenous communities, particularly those in Brazil and the Amazon Rainforest, often lack access to NLP tools and resources, hindering their ability to fully engage in the digital age.
The texts include Portuguese language learning textbooks, Portuguese-language educational materials, important historical texts in the Portuguese language (especially from the Global South), and supplemental examples of high repute. Poetic and educationally phrased texts have been prioritized in this collection. |
open-llm-leaderboard/details_mayacinka__yam-jom-7B | ---
pretty_name: Evaluation run of mayacinka/yam-jom-7B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [mayacinka/yam-jom-7B](https://huggingface.co/mayacinka/yam-jom-7B) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_mayacinka__yam-jom-7B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-02T17:03:11.357915](https://huggingface.co/datasets/open-llm-leaderboard/details_mayacinka__yam-jom-7B/blob/main/results_2024-03-02T17-03-11.357915.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each one in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6515143683215767,\n\
\ \"acc_stderr\": 0.0320997011486287,\n \"acc_norm\": 0.6507984642605499,\n\
\ \"acc_norm_stderr\": 0.03277281147276784,\n \"mc1\": 0.6291309669522643,\n\
\ \"mc1_stderr\": 0.016909693580248835,\n \"mc2\": 0.7804372409066441,\n\
\ \"mc2_stderr\": 0.013661528893794888\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.7081911262798635,\n \"acc_stderr\": 0.013284525292403511,\n\
\ \"acc_norm\": 0.7337883959044369,\n \"acc_norm_stderr\": 0.012915774781523197\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7170882294363673,\n\
\ \"acc_stderr\": 0.004494934025462338,\n \"acc_norm\": 0.8914558852818164,\n\
\ \"acc_norm_stderr\": 0.003104306434972464\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695236,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695236\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6296296296296297,\n\
\ \"acc_stderr\": 0.041716541613545426,\n \"acc_norm\": 0.6296296296296297,\n\
\ \"acc_norm_stderr\": 0.041716541613545426\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7171052631578947,\n \"acc_stderr\": 0.03665349695640767,\n\
\ \"acc_norm\": 0.7171052631578947,\n \"acc_norm_stderr\": 0.03665349695640767\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.63,\n\
\ \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.63,\n \
\ \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7018867924528301,\n \"acc_stderr\": 0.028152837942493864,\n\
\ \"acc_norm\": 0.7018867924528301,\n \"acc_norm_stderr\": 0.028152837942493864\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7708333333333334,\n\
\ \"acc_stderr\": 0.03514697467862388,\n \"acc_norm\": 0.7708333333333334,\n\
\ \"acc_norm_stderr\": 0.03514697467862388\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956912,\n \
\ \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956912\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.56,\n\
\ \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.29,\n \"acc_stderr\": 0.04560480215720684,\n \
\ \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.04560480215720684\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6647398843930635,\n\
\ \"acc_stderr\": 0.03599586301247077,\n \"acc_norm\": 0.6647398843930635,\n\
\ \"acc_norm_stderr\": 0.03599586301247077\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107223,\n\
\ \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107223\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.74,\n \"acc_stderr\": 0.04408440022768078,\n \"acc_norm\": 0.74,\n\
\ \"acc_norm_stderr\": 0.04408440022768078\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5617021276595745,\n \"acc_stderr\": 0.03243618636108102,\n\
\ \"acc_norm\": 0.5617021276595745,\n \"acc_norm_stderr\": 0.03243618636108102\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4649122807017544,\n\
\ \"acc_stderr\": 0.046920083813689104,\n \"acc_norm\": 0.4649122807017544,\n\
\ \"acc_norm_stderr\": 0.046920083813689104\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5448275862068965,\n \"acc_stderr\": 0.04149886942192117,\n\
\ \"acc_norm\": 0.5448275862068965,\n \"acc_norm_stderr\": 0.04149886942192117\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.42063492063492064,\n \"acc_stderr\": 0.025424835086923996,\n \"\
acc_norm\": 0.42063492063492064,\n \"acc_norm_stderr\": 0.025424835086923996\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.46825396825396826,\n\
\ \"acc_stderr\": 0.04463112720677171,\n \"acc_norm\": 0.46825396825396826,\n\
\ \"acc_norm_stderr\": 0.04463112720677171\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7806451612903226,\n\
\ \"acc_stderr\": 0.023540799358723292,\n \"acc_norm\": 0.7806451612903226,\n\
\ \"acc_norm_stderr\": 0.023540799358723292\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5221674876847291,\n \"acc_stderr\": 0.03514528562175007,\n\
\ \"acc_norm\": 0.5221674876847291,\n \"acc_norm_stderr\": 0.03514528562175007\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\"\
: 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7636363636363637,\n \"acc_stderr\": 0.03317505930009182,\n\
\ \"acc_norm\": 0.7636363636363637,\n \"acc_norm_stderr\": 0.03317505930009182\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.797979797979798,\n \"acc_stderr\": 0.02860620428922987,\n \"acc_norm\"\
: 0.797979797979798,\n \"acc_norm_stderr\": 0.02860620428922987\n },\n\
\ \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \
\ \"acc\": 0.9015544041450777,\n \"acc_stderr\": 0.021500249576033456,\n\
\ \"acc_norm\": 0.9015544041450777,\n \"acc_norm_stderr\": 0.021500249576033456\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6564102564102564,\n \"acc_stderr\": 0.024078696580635477,\n\
\ \"acc_norm\": 0.6564102564102564,\n \"acc_norm_stderr\": 0.024078696580635477\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.32222222222222224,\n \"acc_stderr\": 0.028493465091028593,\n \
\ \"acc_norm\": 0.32222222222222224,\n \"acc_norm_stderr\": 0.028493465091028593\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6680672268907563,\n \"acc_stderr\": 0.03058869701378364,\n \
\ \"acc_norm\": 0.6680672268907563,\n \"acc_norm_stderr\": 0.03058869701378364\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.39072847682119205,\n \"acc_stderr\": 0.03983798306659807,\n \"\
acc_norm\": 0.39072847682119205,\n \"acc_norm_stderr\": 0.03983798306659807\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8440366972477065,\n \"acc_stderr\": 0.015555802713590167,\n \"\
acc_norm\": 0.8440366972477065,\n \"acc_norm_stderr\": 0.015555802713590167\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5092592592592593,\n \"acc_stderr\": 0.034093869469927006,\n \"\
acc_norm\": 0.5092592592592593,\n \"acc_norm_stderr\": 0.034093869469927006\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8431372549019608,\n \"acc_stderr\": 0.025524722324553353,\n \"\
acc_norm\": 0.8431372549019608,\n \"acc_norm_stderr\": 0.025524722324553353\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8059071729957806,\n \"acc_stderr\": 0.025744902532290916,\n \
\ \"acc_norm\": 0.8059071729957806,\n \"acc_norm_stderr\": 0.025744902532290916\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6860986547085202,\n\
\ \"acc_stderr\": 0.031146796482972465,\n \"acc_norm\": 0.6860986547085202,\n\
\ \"acc_norm_stderr\": 0.031146796482972465\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8091603053435115,\n \"acc_stderr\": 0.03446513350752598,\n\
\ \"acc_norm\": 0.8091603053435115,\n \"acc_norm_stderr\": 0.03446513350752598\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7520661157024794,\n \"acc_stderr\": 0.03941897526516302,\n \"\
acc_norm\": 0.7520661157024794,\n \"acc_norm_stderr\": 0.03941897526516302\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7777777777777778,\n\
\ \"acc_stderr\": 0.0401910747255735,\n \"acc_norm\": 0.7777777777777778,\n\
\ \"acc_norm_stderr\": 0.0401910747255735\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7914110429447853,\n \"acc_stderr\": 0.031921934489347235,\n\
\ \"acc_norm\": 0.7914110429447853,\n \"acc_norm_stderr\": 0.031921934489347235\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.41964285714285715,\n\
\ \"acc_stderr\": 0.04684099321077106,\n \"acc_norm\": 0.41964285714285715,\n\
\ \"acc_norm_stderr\": 0.04684099321077106\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n\
\ \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8803418803418803,\n\
\ \"acc_stderr\": 0.021262719400406964,\n \"acc_norm\": 0.8803418803418803,\n\
\ \"acc_norm_stderr\": 0.021262719400406964\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8199233716475096,\n\
\ \"acc_stderr\": 0.013740797258579825,\n \"acc_norm\": 0.8199233716475096,\n\
\ \"acc_norm_stderr\": 0.013740797258579825\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7283236994219653,\n \"acc_stderr\": 0.023948512905468358,\n\
\ \"acc_norm\": 0.7283236994219653,\n \"acc_norm_stderr\": 0.023948512905468358\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4346368715083799,\n\
\ \"acc_stderr\": 0.016578997435496713,\n \"acc_norm\": 0.4346368715083799,\n\
\ \"acc_norm_stderr\": 0.016578997435496713\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7222222222222222,\n \"acc_stderr\": 0.025646863097137897,\n\
\ \"acc_norm\": 0.7222222222222222,\n \"acc_norm_stderr\": 0.025646863097137897\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6977491961414791,\n\
\ \"acc_stderr\": 0.02608270069539966,\n \"acc_norm\": 0.6977491961414791,\n\
\ \"acc_norm_stderr\": 0.02608270069539966\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7345679012345679,\n \"acc_stderr\": 0.024569223600460845,\n\
\ \"acc_norm\": 0.7345679012345679,\n \"acc_norm_stderr\": 0.024569223600460845\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.5070921985815603,\n \"acc_stderr\": 0.02982449855912901,\n \
\ \"acc_norm\": 0.5070921985815603,\n \"acc_norm_stderr\": 0.02982449855912901\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.47979139504563234,\n\
\ \"acc_stderr\": 0.012759801427767567,\n \"acc_norm\": 0.47979139504563234,\n\
\ \"acc_norm_stderr\": 0.012759801427767567\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6801470588235294,\n \"acc_stderr\": 0.02833295951403121,\n\
\ \"acc_norm\": 0.6801470588235294,\n \"acc_norm_stderr\": 0.02833295951403121\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6764705882352942,\n \"acc_stderr\": 0.018926082916083383,\n \
\ \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.018926082916083383\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6727272727272727,\n\
\ \"acc_stderr\": 0.0449429086625209,\n \"acc_norm\": 0.6727272727272727,\n\
\ \"acc_norm_stderr\": 0.0449429086625209\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7387755102040816,\n \"acc_stderr\": 0.028123429335142783,\n\
\ \"acc_norm\": 0.7387755102040816,\n \"acc_norm_stderr\": 0.028123429335142783\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8407960199004975,\n\
\ \"acc_stderr\": 0.025870646766169136,\n \"acc_norm\": 0.8407960199004975,\n\
\ \"acc_norm_stderr\": 0.025870646766169136\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774709,\n \
\ \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.03684529491774709\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5662650602409639,\n\
\ \"acc_stderr\": 0.03858158940685516,\n \"acc_norm\": 0.5662650602409639,\n\
\ \"acc_norm_stderr\": 0.03858158940685516\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n\
\ \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.6291309669522643,\n\
\ \"mc1_stderr\": 0.016909693580248835,\n \"mc2\": 0.7804372409066441,\n\
\ \"mc2_stderr\": 0.013661528893794888\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8492501973164956,\n \"acc_stderr\": 0.010056094631479674\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6959818043972706,\n \
\ \"acc_stderr\": 0.012670420440198669\n }\n}\n```"
repo_url: https://huggingface.co/mayacinka/yam-jom-7B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|arc:challenge|25_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|gsm8k|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hellaswag|10_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-02T17-03-11.357915.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-02T17-03-11.357915.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- '**/details_harness|winogrande|5_2024-03-02T17-03-11.357915.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-02T17-03-11.357915.parquet'
- config_name: results
data_files:
- split: 2024_03_02T17_03_11.357915
path:
- results_2024-03-02T17-03-11.357915.parquet
- split: latest
path:
- results_2024-03-02T17-03-11.357915.parquet
---
# Dataset Card for Evaluation run of mayacinka/yam-jom-7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [mayacinka/yam-jom-7B](https://huggingface.co/mayacinka/yam-jom-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_mayacinka__yam-jom-7B",
"harness_winogrande_5",
split="train")
```
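As a side note, the timestamped split names that appear throughout this card (e.g. `2024_03_02T17_03_11.357915`) are simply the run timestamp with `-` and `:` replaced by `_`. A minimal sketch of that mapping (the helper name is our own, not part of the leaderboard tooling):

```python
def split_name_from_timestamp(ts: str) -> str:
    """Map a run timestamp like '2024-03-02T17:03:11.357915' to the
    corresponding dataset split name by replacing '-' and ':' with '_'."""
    return ts.replace("-", "_").replace(":", "_")

# The run shown in this card:
print(split_name_from_timestamp("2024-03-02T17:03:11.357915"))
# 2024_03_02T17_03_11.357915
```

This can help when you want to select a specific historical run instead of the `latest` split.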
## Latest results
These are the [latest results from run 2024-03-02T17:03:11.357915](https://huggingface.co/datasets/open-llm-leaderboard/details_mayacinka__yam-jom-7B/blob/main/results_2024-03-02T17-03-11.357915.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6515143683215767,
"acc_stderr": 0.0320997011486287,
"acc_norm": 0.6507984642605499,
"acc_norm_stderr": 0.03277281147276784,
"mc1": 0.6291309669522643,
"mc1_stderr": 0.016909693580248835,
"mc2": 0.7804372409066441,
"mc2_stderr": 0.013661528893794888
},
"harness|arc:challenge|25": {
"acc": 0.7081911262798635,
"acc_stderr": 0.013284525292403511,
"acc_norm": 0.7337883959044369,
"acc_norm_stderr": 0.012915774781523197
},
"harness|hellaswag|10": {
"acc": 0.7170882294363673,
"acc_stderr": 0.004494934025462338,
"acc_norm": 0.8914558852818164,
"acc_norm_stderr": 0.003104306434972464
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695236,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695236
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6296296296296297,
"acc_stderr": 0.041716541613545426,
"acc_norm": 0.6296296296296297,
"acc_norm_stderr": 0.041716541613545426
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7171052631578947,
"acc_stderr": 0.03665349695640767,
"acc_norm": 0.7171052631578947,
"acc_norm_stderr": 0.03665349695640767
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7018867924528301,
"acc_stderr": 0.028152837942493864,
"acc_norm": 0.7018867924528301,
"acc_norm_stderr": 0.028152837942493864
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7708333333333334,
"acc_stderr": 0.03514697467862388,
"acc_norm": 0.7708333333333334,
"acc_norm_stderr": 0.03514697467862388
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.56,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.56,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.29,
"acc_stderr": 0.04560480215720684,
"acc_norm": 0.29,
"acc_norm_stderr": 0.04560480215720684
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6647398843930635,
"acc_stderr": 0.03599586301247077,
"acc_norm": 0.6647398843930635,
"acc_norm_stderr": 0.03599586301247077
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.38235294117647056,
"acc_stderr": 0.04835503696107223,
"acc_norm": 0.38235294117647056,
"acc_norm_stderr": 0.04835503696107223
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768078,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768078
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5617021276595745,
"acc_stderr": 0.03243618636108102,
"acc_norm": 0.5617021276595745,
"acc_norm_stderr": 0.03243618636108102
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4649122807017544,
"acc_stderr": 0.046920083813689104,
"acc_norm": 0.4649122807017544,
"acc_norm_stderr": 0.046920083813689104
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5448275862068965,
"acc_stderr": 0.04149886942192117,
"acc_norm": 0.5448275862068965,
"acc_norm_stderr": 0.04149886942192117
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.42063492063492064,
"acc_stderr": 0.025424835086923996,
"acc_norm": 0.42063492063492064,
"acc_norm_stderr": 0.025424835086923996
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.46825396825396826,
"acc_stderr": 0.04463112720677171,
"acc_norm": 0.46825396825396826,
"acc_norm_stderr": 0.04463112720677171
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7806451612903226,
"acc_stderr": 0.023540799358723292,
"acc_norm": 0.7806451612903226,
"acc_norm_stderr": 0.023540799358723292
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5221674876847291,
"acc_stderr": 0.03514528562175007,
"acc_norm": 0.5221674876847291,
"acc_norm_stderr": 0.03514528562175007
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7636363636363637,
"acc_stderr": 0.03317505930009182,
"acc_norm": 0.7636363636363637,
"acc_norm_stderr": 0.03317505930009182
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.797979797979798,
"acc_stderr": 0.02860620428922987,
"acc_norm": 0.797979797979798,
"acc_norm_stderr": 0.02860620428922987
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9015544041450777,
"acc_stderr": 0.021500249576033456,
"acc_norm": 0.9015544041450777,
"acc_norm_stderr": 0.021500249576033456
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6564102564102564,
"acc_stderr": 0.024078696580635477,
"acc_norm": 0.6564102564102564,
"acc_norm_stderr": 0.024078696580635477
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.32222222222222224,
"acc_stderr": 0.028493465091028593,
"acc_norm": 0.32222222222222224,
"acc_norm_stderr": 0.028493465091028593
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6680672268907563,
"acc_stderr": 0.03058869701378364,
"acc_norm": 0.6680672268907563,
"acc_norm_stderr": 0.03058869701378364
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.39072847682119205,
"acc_stderr": 0.03983798306659807,
"acc_norm": 0.39072847682119205,
"acc_norm_stderr": 0.03983798306659807
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8440366972477065,
"acc_stderr": 0.015555802713590167,
"acc_norm": 0.8440366972477065,
"acc_norm_stderr": 0.015555802713590167
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5092592592592593,
"acc_stderr": 0.034093869469927006,
"acc_norm": 0.5092592592592593,
"acc_norm_stderr": 0.034093869469927006
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8431372549019608,
"acc_stderr": 0.025524722324553353,
"acc_norm": 0.8431372549019608,
"acc_norm_stderr": 0.025524722324553353
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8059071729957806,
"acc_stderr": 0.025744902532290916,
"acc_norm": 0.8059071729957806,
"acc_norm_stderr": 0.025744902532290916
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6860986547085202,
"acc_stderr": 0.031146796482972465,
"acc_norm": 0.6860986547085202,
"acc_norm_stderr": 0.031146796482972465
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8091603053435115,
"acc_stderr": 0.03446513350752598,
"acc_norm": 0.8091603053435115,
"acc_norm_stderr": 0.03446513350752598
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7520661157024794,
"acc_stderr": 0.03941897526516302,
"acc_norm": 0.7520661157024794,
"acc_norm_stderr": 0.03941897526516302
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.0401910747255735,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.0401910747255735
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7914110429447853,
"acc_stderr": 0.031921934489347235,
"acc_norm": 0.7914110429447853,
"acc_norm_stderr": 0.031921934489347235
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.41964285714285715,
"acc_stderr": 0.04684099321077106,
"acc_norm": 0.41964285714285715,
"acc_norm_stderr": 0.04684099321077106
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8803418803418803,
"acc_stderr": 0.021262719400406964,
"acc_norm": 0.8803418803418803,
"acc_norm_stderr": 0.021262719400406964
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8199233716475096,
"acc_stderr": 0.013740797258579825,
"acc_norm": 0.8199233716475096,
"acc_norm_stderr": 0.013740797258579825
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7283236994219653,
"acc_stderr": 0.023948512905468358,
"acc_norm": 0.7283236994219653,
"acc_norm_stderr": 0.023948512905468358
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4346368715083799,
"acc_stderr": 0.016578997435496713,
"acc_norm": 0.4346368715083799,
"acc_norm_stderr": 0.016578997435496713
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.025646863097137897,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.025646863097137897
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6977491961414791,
"acc_stderr": 0.02608270069539966,
"acc_norm": 0.6977491961414791,
"acc_norm_stderr": 0.02608270069539966
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7345679012345679,
"acc_stderr": 0.024569223600460845,
"acc_norm": 0.7345679012345679,
"acc_norm_stderr": 0.024569223600460845
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5070921985815603,
"acc_stderr": 0.02982449855912901,
"acc_norm": 0.5070921985815603,
"acc_norm_stderr": 0.02982449855912901
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.47979139504563234,
"acc_stderr": 0.012759801427767567,
"acc_norm": 0.47979139504563234,
"acc_norm_stderr": 0.012759801427767567
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6801470588235294,
"acc_stderr": 0.02833295951403121,
"acc_norm": 0.6801470588235294,
"acc_norm_stderr": 0.02833295951403121
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.018926082916083383,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.018926082916083383
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6727272727272727,
"acc_stderr": 0.0449429086625209,
"acc_norm": 0.6727272727272727,
"acc_norm_stderr": 0.0449429086625209
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7387755102040816,
"acc_stderr": 0.028123429335142783,
"acc_norm": 0.7387755102040816,
"acc_norm_stderr": 0.028123429335142783
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8407960199004975,
"acc_stderr": 0.025870646766169136,
"acc_norm": 0.8407960199004975,
"acc_norm_stderr": 0.025870646766169136
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.84,
"acc_stderr": 0.03684529491774709,
"acc_norm": 0.84,
"acc_norm_stderr": 0.03684529491774709
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5662650602409639,
"acc_stderr": 0.03858158940685516,
"acc_norm": 0.5662650602409639,
"acc_norm_stderr": 0.03858158940685516
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.6291309669522643,
"mc1_stderr": 0.016909693580248835,
"mc2": 0.7804372409066441,
"mc2_stderr": 0.013661528893794888
},
"harness|winogrande|5": {
"acc": 0.8492501973164956,
"acc_stderr": 0.010056094631479674
},
"harness|gsm8k|5": {
"acc": 0.6959818043972706,
"acc_stderr": 0.012670420440198669
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
fut501/ds1 | ---
license: apache-2.0
---
|
freshpearYoon/vr_train_free_70 | ---
dataset_info:
features:
- name: audio
struct:
- name: array
sequence: float64
- name: path
dtype: string
- name: sampling_rate
dtype: int64
- name: filename
dtype: string
- name: NumOfUtterance
dtype: int64
- name: text
dtype: string
- name: samplingrate
dtype: int64
- name: begin_time
dtype: float64
- name: end_time
dtype: float64
- name: speaker_id
dtype: string
- name: directory
dtype: string
splits:
- name: train
num_bytes: 5917425888
num_examples: 10000
download_size: 920826428
dataset_size: 5917425888
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/details_WebraftAI__synapsellm-7b-mistral-v0.4-preview3 | ---
pretty_name: Evaluation run of WebraftAI/synapsellm-7b-mistral-v0.4-preview3
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [WebraftAI/synapsellm-7b-mistral-v0.4-preview3](https://huggingface.co/WebraftAI/synapsellm-7b-mistral-v0.4-preview3)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_WebraftAI__synapsellm-7b-mistral-v0.4-preview3\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-12-09T20:24:42.121892](https://huggingface.co/datasets/open-llm-leaderboard/details_WebraftAI__synapsellm-7b-mistral-v0.4-preview3/blob/main/results_2023-12-09T20-24-42.121892.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5277560649597496,\n\
\ \"acc_stderr\": 0.03425240844552933,\n \"acc_norm\": 0.532688176887894,\n\
\ \"acc_norm_stderr\": 0.034990875171934714,\n \"mc1\": 0.35862913096695226,\n\
\ \"mc1_stderr\": 0.016789289499502025,\n \"mc2\": 0.5235126569364149,\n\
\ \"mc2_stderr\": 0.015157264857162787\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.49146757679180886,\n \"acc_stderr\": 0.014609263165632182,\n\
\ \"acc_norm\": 0.5127986348122867,\n \"acc_norm_stderr\": 0.014606603181012541\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5513841864170484,\n\
\ \"acc_stderr\": 0.00496336208527556,\n \"acc_norm\": 0.7482573192591118,\n\
\ \"acc_norm_stderr\": 0.004331271717773856\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.43703703703703706,\n\
\ \"acc_stderr\": 0.04284958639753399,\n \"acc_norm\": 0.43703703703703706,\n\
\ \"acc_norm_stderr\": 0.04284958639753399\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.506578947368421,\n \"acc_stderr\": 0.040685900502249704,\n\
\ \"acc_norm\": 0.506578947368421,\n \"acc_norm_stderr\": 0.040685900502249704\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.43,\n\
\ \"acc_stderr\": 0.04975698519562428,\n \"acc_norm\": 0.43,\n \
\ \"acc_norm_stderr\": 0.04975698519562428\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6264150943396226,\n \"acc_stderr\": 0.029773082713319875,\n\
\ \"acc_norm\": 0.6264150943396226,\n \"acc_norm_stderr\": 0.029773082713319875\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5416666666666666,\n\
\ \"acc_stderr\": 0.04166666666666665,\n \"acc_norm\": 0.5416666666666666,\n\
\ \"acc_norm_stderr\": 0.04166666666666665\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.47,\n \"acc_stderr\": 0.05016135580465919,\n \
\ \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.05016135580465919\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.47,\n \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.47,\n\
\ \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5433526011560693,\n\
\ \"acc_stderr\": 0.03798106566014498,\n \"acc_norm\": 0.5433526011560693,\n\
\ \"acc_norm_stderr\": 0.03798106566014498\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.28431372549019607,\n \"acc_stderr\": 0.04488482852329017,\n\
\ \"acc_norm\": 0.28431372549019607,\n \"acc_norm_stderr\": 0.04488482852329017\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.65,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.65,\n\
\ \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.4340425531914894,\n \"acc_stderr\": 0.032400380867927465,\n\
\ \"acc_norm\": 0.4340425531914894,\n \"acc_norm_stderr\": 0.032400380867927465\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.32456140350877194,\n\
\ \"acc_stderr\": 0.04404556157374767,\n \"acc_norm\": 0.32456140350877194,\n\
\ \"acc_norm_stderr\": 0.04404556157374767\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.45517241379310347,\n \"acc_stderr\": 0.04149886942192117,\n\
\ \"acc_norm\": 0.45517241379310347,\n \"acc_norm_stderr\": 0.04149886942192117\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.373015873015873,\n \"acc_stderr\": 0.02490699045899257,\n \"acc_norm\"\
: 0.373015873015873,\n \"acc_norm_stderr\": 0.02490699045899257\n },\n\
\ \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.38095238095238093,\n\
\ \"acc_stderr\": 0.043435254289490965,\n \"acc_norm\": 0.38095238095238093,\n\
\ \"acc_norm_stderr\": 0.043435254289490965\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.6548387096774193,\n \"acc_stderr\": 0.027045746573534327,\n \"\
acc_norm\": 0.6548387096774193,\n \"acc_norm_stderr\": 0.027045746573534327\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.3842364532019704,\n \"acc_stderr\": 0.03422398565657551,\n \"\
acc_norm\": 0.3842364532019704,\n \"acc_norm_stderr\": 0.03422398565657551\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\"\
: 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.6303030303030303,\n \"acc_stderr\": 0.037694303145125674,\n\
\ \"acc_norm\": 0.6303030303030303,\n \"acc_norm_stderr\": 0.037694303145125674\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7070707070707071,\n \"acc_stderr\": 0.03242497958178815,\n \"\
acc_norm\": 0.7070707070707071,\n \"acc_norm_stderr\": 0.03242497958178815\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.7150259067357513,\n \"acc_stderr\": 0.03257714077709662,\n\
\ \"acc_norm\": 0.7150259067357513,\n \"acc_norm_stderr\": 0.03257714077709662\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.5153846153846153,\n \"acc_stderr\": 0.025339003010106515,\n\
\ \"acc_norm\": 0.5153846153846153,\n \"acc_norm_stderr\": 0.025339003010106515\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.32592592592592595,\n \"acc_stderr\": 0.02857834836547307,\n \
\ \"acc_norm\": 0.32592592592592595,\n \"acc_norm_stderr\": 0.02857834836547307\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.5630252100840336,\n \"acc_stderr\": 0.03221943636566196,\n \
\ \"acc_norm\": 0.5630252100840336,\n \"acc_norm_stderr\": 0.03221943636566196\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.304635761589404,\n \"acc_stderr\": 0.03757949922943343,\n \"acc_norm\"\
: 0.304635761589404,\n \"acc_norm_stderr\": 0.03757949922943343\n },\n\
\ \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7192660550458716,\n\
\ \"acc_stderr\": 0.019266055045871616,\n \"acc_norm\": 0.7192660550458716,\n\
\ \"acc_norm_stderr\": 0.019266055045871616\n },\n \"harness|hendrycksTest-high_school_statistics|5\"\
: {\n \"acc\": 0.4861111111111111,\n \"acc_stderr\": 0.03408655867977749,\n\
\ \"acc_norm\": 0.4861111111111111,\n \"acc_norm_stderr\": 0.03408655867977749\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.6519607843137255,\n \"acc_stderr\": 0.03343311240488418,\n \"\
acc_norm\": 0.6519607843137255,\n \"acc_norm_stderr\": 0.03343311240488418\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.6666666666666666,\n \"acc_stderr\": 0.03068582059661079,\n \
\ \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.03068582059661079\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.5829596412556054,\n\
\ \"acc_stderr\": 0.03309266936071721,\n \"acc_norm\": 0.5829596412556054,\n\
\ \"acc_norm_stderr\": 0.03309266936071721\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.6335877862595419,\n \"acc_stderr\": 0.04225875451969638,\n\
\ \"acc_norm\": 0.6335877862595419,\n \"acc_norm_stderr\": 0.04225875451969638\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.6446280991735537,\n \"acc_stderr\": 0.0436923632657398,\n \"acc_norm\"\
: 0.6446280991735537,\n \"acc_norm_stderr\": 0.0436923632657398\n },\n\
\ \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6388888888888888,\n\
\ \"acc_stderr\": 0.04643454608906276,\n \"acc_norm\": 0.6388888888888888,\n\
\ \"acc_norm_stderr\": 0.04643454608906276\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.588957055214724,\n \"acc_stderr\": 0.038656978537853624,\n\
\ \"acc_norm\": 0.588957055214724,\n \"acc_norm_stderr\": 0.038656978537853624\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.36607142857142855,\n\
\ \"acc_stderr\": 0.045723723587374296,\n \"acc_norm\": 0.36607142857142855,\n\
\ \"acc_norm_stderr\": 0.045723723587374296\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7184466019417476,\n \"acc_stderr\": 0.044532548363264673,\n\
\ \"acc_norm\": 0.7184466019417476,\n \"acc_norm_stderr\": 0.044532548363264673\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8376068376068376,\n\
\ \"acc_stderr\": 0.02416161812798774,\n \"acc_norm\": 0.8376068376068376,\n\
\ \"acc_norm_stderr\": 0.02416161812798774\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.59,\n \"acc_stderr\": 0.04943110704237102,\n \
\ \"acc_norm\": 0.59,\n \"acc_norm_stderr\": 0.04943110704237102\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7254150702426565,\n\
\ \"acc_stderr\": 0.015959829933084032,\n \"acc_norm\": 0.7254150702426565,\n\
\ \"acc_norm_stderr\": 0.015959829933084032\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.569364161849711,\n \"acc_stderr\": 0.026658800273672376,\n\
\ \"acc_norm\": 0.569364161849711,\n \"acc_norm_stderr\": 0.026658800273672376\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3229050279329609,\n\
\ \"acc_stderr\": 0.01563844038024149,\n \"acc_norm\": 0.3229050279329609,\n\
\ \"acc_norm_stderr\": 0.01563844038024149\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6045751633986928,\n \"acc_stderr\": 0.02799672318063145,\n\
\ \"acc_norm\": 0.6045751633986928,\n \"acc_norm_stderr\": 0.02799672318063145\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6012861736334405,\n\
\ \"acc_stderr\": 0.027809322585774503,\n \"acc_norm\": 0.6012861736334405,\n\
\ \"acc_norm_stderr\": 0.027809322585774503\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.5154320987654321,\n \"acc_stderr\": 0.02780749004427619,\n\
\ \"acc_norm\": 0.5154320987654321,\n \"acc_norm_stderr\": 0.02780749004427619\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.34397163120567376,\n \"acc_stderr\": 0.028338017428611324,\n \
\ \"acc_norm\": 0.34397163120567376,\n \"acc_norm_stderr\": 0.028338017428611324\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.3650586701434159,\n\
\ \"acc_stderr\": 0.012296373743443478,\n \"acc_norm\": 0.3650586701434159,\n\
\ \"acc_norm_stderr\": 0.012296373743443478\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.5661764705882353,\n \"acc_stderr\": 0.03010563657001663,\n\
\ \"acc_norm\": 0.5661764705882353,\n \"acc_norm_stderr\": 0.03010563657001663\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.4722222222222222,\n \"acc_stderr\": 0.020196594933541194,\n \
\ \"acc_norm\": 0.4722222222222222,\n \"acc_norm_stderr\": 0.020196594933541194\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.5909090909090909,\n\
\ \"acc_stderr\": 0.04709306978661895,\n \"acc_norm\": 0.5909090909090909,\n\
\ \"acc_norm_stderr\": 0.04709306978661895\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.5918367346938775,\n \"acc_stderr\": 0.03146465712827423,\n\
\ \"acc_norm\": 0.5918367346938775,\n \"acc_norm_stderr\": 0.03146465712827423\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6915422885572139,\n\
\ \"acc_stderr\": 0.032658195885126966,\n \"acc_norm\": 0.6915422885572139,\n\
\ \"acc_norm_stderr\": 0.032658195885126966\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.79,\n \"acc_stderr\": 0.040936018074033256,\n \
\ \"acc_norm\": 0.79,\n \"acc_norm_stderr\": 0.040936018074033256\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4457831325301205,\n\
\ \"acc_stderr\": 0.03869543323472101,\n \"acc_norm\": 0.4457831325301205,\n\
\ \"acc_norm_stderr\": 0.03869543323472101\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.6783625730994152,\n \"acc_stderr\": 0.03582529442573122,\n\
\ \"acc_norm\": 0.6783625730994152,\n \"acc_norm_stderr\": 0.03582529442573122\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.35862913096695226,\n\
\ \"mc1_stderr\": 0.016789289499502025,\n \"mc2\": 0.5235126569364149,\n\
\ \"mc2_stderr\": 0.015157264857162787\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7348066298342542,\n \"acc_stderr\": 0.01240654946619286\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.24791508718726307,\n \
\ \"acc_stderr\": 0.011893980214826171\n }\n}\n```"
repo_url: https://huggingface.co/WebraftAI/synapsellm-7b-mistral-v0.4-preview3
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|arc:challenge|25_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|gsm8k|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hellaswag|10_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-09T20-24-42.121892.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-09T20-24-42.121892.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- '**/details_harness|winogrande|5_2023-12-09T20-24-42.121892.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-12-09T20-24-42.121892.parquet'
- config_name: results
data_files:
- split: 2023_12_09T20_24_42.121892
path:
- results_2023-12-09T20-24-42.121892.parquet
- split: latest
path:
- results_2023-12-09T20-24-42.121892.parquet
---
# Dataset Card for Evaluation run of WebraftAI/synapsellm-7b-mistral-v0.4-preview3
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/WebraftAI/synapsellm-7b-mistral-v0.4-preview3
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [WebraftAI/synapsellm-7b-mistral-v0.4-preview3](https://huggingface.co/WebraftAI/synapsellm-7b-mistral-v0.4-preview3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_WebraftAI__synapsellm-7b-mistral-v0.4-preview3",
"harness_winogrande_5",
split="train")
```
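As the split listing above shows, each timestamped split name is simply the run timestamp with the `-` and `:` characters replaced by `_`. A minimal sketch of that mapping (using the run timestamp from this card):

```python
# Split names are the run timestamp with "-" and ":" replaced by "_"
run_timestamp = "2023-12-09T20:24:42.121892"
split_name = run_timestamp.replace("-", "_").replace(":", "_")
print(split_name)  # 2023_12_09T20_24_42.121892

# The resulting name can be passed instead of "latest" to pin a specific run:
# data = load_dataset(
#     "open-llm-leaderboard/details_WebraftAI__synapsellm-7b-mistral-v0.4-preview3",
#     "harness_winogrande_5",
#     split=split_name,
# )
```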
## Latest results
These are the [latest results from run 2023-12-09T20:24:42.121892](https://huggingface.co/datasets/open-llm-leaderboard/details_WebraftAI__synapsellm-7b-mistral-v0.4-preview3/blob/main/results_2023-12-09T20-24-42.121892.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5277560649597496,
"acc_stderr": 0.03425240844552933,
"acc_norm": 0.532688176887894,
"acc_norm_stderr": 0.034990875171934714,
"mc1": 0.35862913096695226,
"mc1_stderr": 0.016789289499502025,
"mc2": 0.5235126569364149,
"mc2_stderr": 0.015157264857162787
},
"harness|arc:challenge|25": {
"acc": 0.49146757679180886,
"acc_stderr": 0.014609263165632182,
"acc_norm": 0.5127986348122867,
"acc_norm_stderr": 0.014606603181012541
},
"harness|hellaswag|10": {
"acc": 0.5513841864170484,
"acc_stderr": 0.00496336208527556,
"acc_norm": 0.7482573192591118,
"acc_norm_stderr": 0.004331271717773856
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.43703703703703706,
"acc_stderr": 0.04284958639753399,
"acc_norm": 0.43703703703703706,
"acc_norm_stderr": 0.04284958639753399
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.506578947368421,
"acc_stderr": 0.040685900502249704,
"acc_norm": 0.506578947368421,
"acc_norm_stderr": 0.040685900502249704
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.43,
"acc_stderr": 0.04975698519562428,
"acc_norm": 0.43,
"acc_norm_stderr": 0.04975698519562428
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6264150943396226,
"acc_stderr": 0.029773082713319875,
"acc_norm": 0.6264150943396226,
"acc_norm_stderr": 0.029773082713319875
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.5416666666666666,
"acc_stderr": 0.04166666666666665,
"acc_norm": 0.5416666666666666,
"acc_norm_stderr": 0.04166666666666665
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.47,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.47,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.47,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.47,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5433526011560693,
"acc_stderr": 0.03798106566014498,
"acc_norm": 0.5433526011560693,
"acc_norm_stderr": 0.03798106566014498
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.28431372549019607,
"acc_stderr": 0.04488482852329017,
"acc_norm": 0.28431372549019607,
"acc_norm_stderr": 0.04488482852329017
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.65,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.65,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.4340425531914894,
"acc_stderr": 0.032400380867927465,
"acc_norm": 0.4340425531914894,
"acc_norm_stderr": 0.032400380867927465
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.32456140350877194,
"acc_stderr": 0.04404556157374767,
"acc_norm": 0.32456140350877194,
"acc_norm_stderr": 0.04404556157374767
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.45517241379310347,
"acc_stderr": 0.04149886942192117,
"acc_norm": 0.45517241379310347,
"acc_norm_stderr": 0.04149886942192117
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.373015873015873,
"acc_stderr": 0.02490699045899257,
"acc_norm": 0.373015873015873,
"acc_norm_stderr": 0.02490699045899257
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.38095238095238093,
"acc_stderr": 0.043435254289490965,
"acc_norm": 0.38095238095238093,
"acc_norm_stderr": 0.043435254289490965
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6548387096774193,
"acc_stderr": 0.027045746573534327,
"acc_norm": 0.6548387096774193,
"acc_norm_stderr": 0.027045746573534327
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.3842364532019704,
"acc_stderr": 0.03422398565657551,
"acc_norm": 0.3842364532019704,
"acc_norm_stderr": 0.03422398565657551
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.6303030303030303,
"acc_stderr": 0.037694303145125674,
"acc_norm": 0.6303030303030303,
"acc_norm_stderr": 0.037694303145125674
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7070707070707071,
"acc_stderr": 0.03242497958178815,
"acc_norm": 0.7070707070707071,
"acc_norm_stderr": 0.03242497958178815
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.7150259067357513,
"acc_stderr": 0.03257714077709662,
"acc_norm": 0.7150259067357513,
"acc_norm_stderr": 0.03257714077709662
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5153846153846153,
"acc_stderr": 0.025339003010106515,
"acc_norm": 0.5153846153846153,
"acc_norm_stderr": 0.025339003010106515
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.32592592592592595,
"acc_stderr": 0.02857834836547307,
"acc_norm": 0.32592592592592595,
"acc_norm_stderr": 0.02857834836547307
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.5630252100840336,
"acc_stderr": 0.03221943636566196,
"acc_norm": 0.5630252100840336,
"acc_norm_stderr": 0.03221943636566196
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.304635761589404,
"acc_stderr": 0.03757949922943343,
"acc_norm": 0.304635761589404,
"acc_norm_stderr": 0.03757949922943343
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7192660550458716,
"acc_stderr": 0.019266055045871616,
"acc_norm": 0.7192660550458716,
"acc_norm_stderr": 0.019266055045871616
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4861111111111111,
"acc_stderr": 0.03408655867977749,
"acc_norm": 0.4861111111111111,
"acc_norm_stderr": 0.03408655867977749
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.6519607843137255,
"acc_stderr": 0.03343311240488418,
"acc_norm": 0.6519607843137255,
"acc_norm_stderr": 0.03343311240488418
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.03068582059661079,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.03068582059661079
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.5829596412556054,
"acc_stderr": 0.03309266936071721,
"acc_norm": 0.5829596412556054,
"acc_norm_stderr": 0.03309266936071721
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6335877862595419,
"acc_stderr": 0.04225875451969638,
"acc_norm": 0.6335877862595419,
"acc_norm_stderr": 0.04225875451969638
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.6446280991735537,
"acc_stderr": 0.0436923632657398,
"acc_norm": 0.6446280991735537,
"acc_norm_stderr": 0.0436923632657398
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.6388888888888888,
"acc_stderr": 0.04643454608906276,
"acc_norm": 0.6388888888888888,
"acc_norm_stderr": 0.04643454608906276
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.588957055214724,
"acc_stderr": 0.038656978537853624,
"acc_norm": 0.588957055214724,
"acc_norm_stderr": 0.038656978537853624
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.36607142857142855,
"acc_stderr": 0.045723723587374296,
"acc_norm": 0.36607142857142855,
"acc_norm_stderr": 0.045723723587374296
},
"harness|hendrycksTest-management|5": {
"acc": 0.7184466019417476,
"acc_stderr": 0.044532548363264673,
"acc_norm": 0.7184466019417476,
"acc_norm_stderr": 0.044532548363264673
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8376068376068376,
"acc_stderr": 0.02416161812798774,
"acc_norm": 0.8376068376068376,
"acc_norm_stderr": 0.02416161812798774
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.59,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.59,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7254150702426565,
"acc_stderr": 0.015959829933084032,
"acc_norm": 0.7254150702426565,
"acc_norm_stderr": 0.015959829933084032
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.569364161849711,
"acc_stderr": 0.026658800273672376,
"acc_norm": 0.569364161849711,
"acc_norm_stderr": 0.026658800273672376
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3229050279329609,
"acc_stderr": 0.01563844038024149,
"acc_norm": 0.3229050279329609,
"acc_norm_stderr": 0.01563844038024149
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6045751633986928,
"acc_stderr": 0.02799672318063145,
"acc_norm": 0.6045751633986928,
"acc_norm_stderr": 0.02799672318063145
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6012861736334405,
"acc_stderr": 0.027809322585774503,
"acc_norm": 0.6012861736334405,
"acc_norm_stderr": 0.027809322585774503
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.5154320987654321,
"acc_stderr": 0.02780749004427619,
"acc_norm": 0.5154320987654321,
"acc_norm_stderr": 0.02780749004427619
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.34397163120567376,
"acc_stderr": 0.028338017428611324,
"acc_norm": 0.34397163120567376,
"acc_norm_stderr": 0.028338017428611324
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.3650586701434159,
"acc_stderr": 0.012296373743443478,
"acc_norm": 0.3650586701434159,
"acc_norm_stderr": 0.012296373743443478
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5661764705882353,
"acc_stderr": 0.03010563657001663,
"acc_norm": 0.5661764705882353,
"acc_norm_stderr": 0.03010563657001663
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.4722222222222222,
"acc_stderr": 0.020196594933541194,
"acc_norm": 0.4722222222222222,
"acc_norm_stderr": 0.020196594933541194
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.5909090909090909,
"acc_stderr": 0.04709306978661895,
"acc_norm": 0.5909090909090909,
"acc_norm_stderr": 0.04709306978661895
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.5918367346938775,
"acc_stderr": 0.03146465712827423,
"acc_norm": 0.5918367346938775,
"acc_norm_stderr": 0.03146465712827423
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.6915422885572139,
"acc_stderr": 0.032658195885126966,
"acc_norm": 0.6915422885572139,
"acc_norm_stderr": 0.032658195885126966
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.79,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.79,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4457831325301205,
"acc_stderr": 0.03869543323472101,
"acc_norm": 0.4457831325301205,
"acc_norm_stderr": 0.03869543323472101
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.6783625730994152,
"acc_stderr": 0.03582529442573122,
"acc_norm": 0.6783625730994152,
"acc_norm_stderr": 0.03582529442573122
},
"harness|truthfulqa:mc|0": {
"mc1": 0.35862913096695226,
"mc1_stderr": 0.016789289499502025,
"mc2": 0.5235126569364149,
"mc2_stderr": 0.015157264857162787
},
"harness|winogrande|5": {
"acc": 0.7348066298342542,
"acc_stderr": 0.01240654946619286
},
"harness|gsm8k|5": {
"acc": 0.24791508718726307,
"acc_stderr": 0.011893980214826171
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
zxcej/AICE_binary_dataset | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Disease
'1': Healthy
splits:
- name: train
num_bytes: 996171511.8039621
num_examples: 14784
- name: test
num_bytes: 247564754.71103784
num_examples: 3697
download_size: 1242379757
dataset_size: 1243736266.5149999
---
# Dataset Card for "AICE_binary_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-staging-eval-project-xsum-69daf1dd-12935740 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- xsum
eval_info:
task: summarization
model: sshleifer/distilbart-xsum-12-6
metrics: ['bleu']
dataset_name: xsum
dataset_config: default
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: sshleifer/distilbart-xsum-12-6
* Dataset: xsum
* Config: default
* Split: test
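The `col_mapping` entry in the metadata above tells the evaluator which xsum columns play the generic `text`/`target` roles. A small illustrative sketch of that renaming (the example record is hypothetical):

```python
# col_mapping from the card metadata: generic name -> xsum column name
col_mapping = {"text": "document", "target": "summary"}

# A hypothetical xsum-style record
example = {"document": "Full article text...", "summary": "One-line summary."}

# Remap the dataset's columns to the evaluator's generic names
mapped = {generic: example[original] for generic, original in col_mapping.items()}
print(mapped)  # {'text': 'Full article text...', 'target': 'One-line summary.'}
```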
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@xarymast](https://huggingface.co/xarymast) for evaluating this model. |
WilliamWen/train_set_001 | ---
license: apache-2.0
task_categories:
- token-classification
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
NobodyExistsOnTheInternet/Random20k | ---
license: mit
---
|
open-llm-leaderboard/details_wannaphong__han-llm-7b-v3 | ---
pretty_name: Evaluation run of wannaphong/han-llm-7b-v3
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [wannaphong/han-llm-7b-v3](https://huggingface.co/wannaphong/han-llm-7b-v3) on\
\ the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_wannaphong__han-llm-7b-v3\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-12T10:34:55.967829](https://huggingface.co/datasets/open-llm-leaderboard/details_wannaphong__han-llm-7b-v3/blob/main/results_2024-03-12T10-34-55.967829.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5940739657168311,\n\
\ \"acc_stderr\": 0.03324959562287332,\n \"acc_norm\": 0.5995378543743849,\n\
\ \"acc_norm_stderr\": 0.033933345762919995,\n \"mc1\": 0.2802937576499388,\n\
\ \"mc1_stderr\": 0.015723139524608763,\n \"mc2\": 0.4312192117351884,\n\
\ \"mc2_stderr\": 0.01428070620058338\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.537542662116041,\n \"acc_stderr\": 0.014570144495075581,\n\
\ \"acc_norm\": 0.5870307167235495,\n \"acc_norm_stderr\": 0.014388344935398326\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6102370045807608,\n\
\ \"acc_stderr\": 0.004866997110388195,\n \"acc_norm\": 0.8178649671380203,\n\
\ \"acc_norm_stderr\": 0.0038516699346338836\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n\
\ \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5407407407407407,\n\
\ \"acc_stderr\": 0.04304979692464241,\n \"acc_norm\": 0.5407407407407407,\n\
\ \"acc_norm_stderr\": 0.04304979692464241\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.618421052631579,\n \"acc_stderr\": 0.039531733777491945,\n\
\ \"acc_norm\": 0.618421052631579,\n \"acc_norm_stderr\": 0.039531733777491945\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.55,\n\
\ \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"acc_norm_stderr\"\
: 0.05\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"\
acc\": 0.6415094339622641,\n \"acc_stderr\": 0.029514703583981772,\n \
\ \"acc_norm\": 0.6415094339622641,\n \"acc_norm_stderr\": 0.029514703583981772\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7013888888888888,\n\
\ \"acc_stderr\": 0.03827052357950756,\n \"acc_norm\": 0.7013888888888888,\n\
\ \"acc_norm_stderr\": 0.03827052357950756\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620332,\n \
\ \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620332\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"\
acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5780346820809249,\n\
\ \"acc_stderr\": 0.037657466938651504,\n \"acc_norm\": 0.5780346820809249,\n\
\ \"acc_norm_stderr\": 0.037657466938651504\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4019607843137255,\n \"acc_stderr\": 0.048786087144669955,\n\
\ \"acc_norm\": 0.4019607843137255,\n \"acc_norm_stderr\": 0.048786087144669955\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.78,\n \"acc_stderr\": 0.04163331998932261,\n \"acc_norm\": 0.78,\n\
\ \"acc_norm_stderr\": 0.04163331998932261\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.502127659574468,\n \"acc_stderr\": 0.03268572658667492,\n\
\ \"acc_norm\": 0.502127659574468,\n \"acc_norm_stderr\": 0.03268572658667492\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4298245614035088,\n\
\ \"acc_stderr\": 0.04657047260594963,\n \"acc_norm\": 0.4298245614035088,\n\
\ \"acc_norm_stderr\": 0.04657047260594963\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5793103448275863,\n \"acc_stderr\": 0.0411391498118926,\n\
\ \"acc_norm\": 0.5793103448275863,\n \"acc_norm_stderr\": 0.0411391498118926\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4021164021164021,\n \"acc_stderr\": 0.02525303255499769,\n \"\
acc_norm\": 0.4021164021164021,\n \"acc_norm_stderr\": 0.02525303255499769\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3492063492063492,\n\
\ \"acc_stderr\": 0.04263906892795132,\n \"acc_norm\": 0.3492063492063492,\n\
\ \"acc_norm_stderr\": 0.04263906892795132\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6903225806451613,\n\
\ \"acc_stderr\": 0.026302774983517414,\n \"acc_norm\": 0.6903225806451613,\n\
\ \"acc_norm_stderr\": 0.026302774983517414\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.46798029556650245,\n \"acc_stderr\": 0.03510766597959217,\n\
\ \"acc_norm\": 0.46798029556650245,\n \"acc_norm_stderr\": 0.03510766597959217\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.62,\n \"acc_stderr\": 0.04878317312145633,\n \"acc_norm\"\
: 0.62,\n \"acc_norm_stderr\": 0.04878317312145633\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.6909090909090909,\n \"acc_stderr\": 0.036085410115739666,\n\
\ \"acc_norm\": 0.6909090909090909,\n \"acc_norm_stderr\": 0.036085410115739666\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7525252525252525,\n \"acc_stderr\": 0.030746300742124498,\n \"\
acc_norm\": 0.7525252525252525,\n \"acc_norm_stderr\": 0.030746300742124498\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8341968911917098,\n \"acc_stderr\": 0.026839845022314415,\n\
\ \"acc_norm\": 0.8341968911917098,\n \"acc_norm_stderr\": 0.026839845022314415\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.5692307692307692,\n \"acc_stderr\": 0.02510682066053975,\n \
\ \"acc_norm\": 0.5692307692307692,\n \"acc_norm_stderr\": 0.02510682066053975\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.34444444444444444,\n \"acc_stderr\": 0.028972648884844267,\n \
\ \"acc_norm\": 0.34444444444444444,\n \"acc_norm_stderr\": 0.028972648884844267\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.5714285714285714,\n \"acc_stderr\": 0.03214536859788639,\n \
\ \"acc_norm\": 0.5714285714285714,\n \"acc_norm_stderr\": 0.03214536859788639\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.32450331125827814,\n \"acc_stderr\": 0.038227469376587525,\n \"\
acc_norm\": 0.32450331125827814,\n \"acc_norm_stderr\": 0.038227469376587525\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.763302752293578,\n \"acc_stderr\": 0.0182240781172991,\n \"acc_norm\"\
: 0.763302752293578,\n \"acc_norm_stderr\": 0.0182240781172991\n },\n\
\ \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4212962962962963,\n\
\ \"acc_stderr\": 0.03367462138896079,\n \"acc_norm\": 0.4212962962962963,\n\
\ \"acc_norm_stderr\": 0.03367462138896079\n },\n \"harness|hendrycksTest-high_school_us_history|5\"\
: {\n \"acc\": 0.7352941176470589,\n \"acc_stderr\": 0.0309645179269234,\n\
\ \"acc_norm\": 0.7352941176470589,\n \"acc_norm_stderr\": 0.0309645179269234\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.729957805907173,\n \"acc_stderr\": 0.028900721906293433,\n \
\ \"acc_norm\": 0.729957805907173,\n \"acc_norm_stderr\": 0.028900721906293433\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6681614349775785,\n\
\ \"acc_stderr\": 0.031602951437766785,\n \"acc_norm\": 0.6681614349775785,\n\
\ \"acc_norm_stderr\": 0.031602951437766785\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7786259541984732,\n \"acc_stderr\": 0.0364129708131373,\n\
\ \"acc_norm\": 0.7786259541984732,\n \"acc_norm_stderr\": 0.0364129708131373\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7107438016528925,\n \"acc_stderr\": 0.041391127276354626,\n \"\
acc_norm\": 0.7107438016528925,\n \"acc_norm_stderr\": 0.041391127276354626\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6759259259259259,\n\
\ \"acc_stderr\": 0.04524596007030048,\n \"acc_norm\": 0.6759259259259259,\n\
\ \"acc_norm_stderr\": 0.04524596007030048\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7300613496932515,\n \"acc_stderr\": 0.03487825168497892,\n\
\ \"acc_norm\": 0.7300613496932515,\n \"acc_norm_stderr\": 0.03487825168497892\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4107142857142857,\n\
\ \"acc_stderr\": 0.04669510663875191,\n \"acc_norm\": 0.4107142857142857,\n\
\ \"acc_norm_stderr\": 0.04669510663875191\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7572815533980582,\n \"acc_stderr\": 0.04245022486384495,\n\
\ \"acc_norm\": 0.7572815533980582,\n \"acc_norm_stderr\": 0.04245022486384495\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8418803418803419,\n\
\ \"acc_stderr\": 0.02390232554956039,\n \"acc_norm\": 0.8418803418803419,\n\
\ \"acc_norm_stderr\": 0.02390232554956039\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.67,\n \"acc_stderr\": 0.047258156262526094,\n \
\ \"acc_norm\": 0.67,\n \"acc_norm_stderr\": 0.047258156262526094\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7905491698595147,\n\
\ \"acc_stderr\": 0.014551310568143714,\n \"acc_norm\": 0.7905491698595147,\n\
\ \"acc_norm_stderr\": 0.014551310568143714\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6705202312138728,\n \"acc_stderr\": 0.025305258131879716,\n\
\ \"acc_norm\": 0.6705202312138728,\n \"acc_norm_stderr\": 0.025305258131879716\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.28938547486033517,\n\
\ \"acc_stderr\": 0.01516654455049031,\n \"acc_norm\": 0.28938547486033517,\n\
\ \"acc_norm_stderr\": 0.01516654455049031\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6862745098039216,\n \"acc_stderr\": 0.02656892101545715,\n\
\ \"acc_norm\": 0.6862745098039216,\n \"acc_norm_stderr\": 0.02656892101545715\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6527331189710611,\n\
\ \"acc_stderr\": 0.027040745502307336,\n \"acc_norm\": 0.6527331189710611,\n\
\ \"acc_norm_stderr\": 0.027040745502307336\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6728395061728395,\n \"acc_stderr\": 0.026105673861409828,\n\
\ \"acc_norm\": 0.6728395061728395,\n \"acc_norm_stderr\": 0.026105673861409828\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.42907801418439717,\n \"acc_stderr\": 0.029525914302558555,\n \
\ \"acc_norm\": 0.42907801418439717,\n \"acc_norm_stderr\": 0.029525914302558555\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.423728813559322,\n\
\ \"acc_stderr\": 0.012620785155885996,\n \"acc_norm\": 0.423728813559322,\n\
\ \"acc_norm_stderr\": 0.012620785155885996\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.5845588235294118,\n \"acc_stderr\": 0.02993534270787774,\n\
\ \"acc_norm\": 0.5845588235294118,\n \"acc_norm_stderr\": 0.02993534270787774\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.630718954248366,\n \"acc_stderr\": 0.019524316744866353,\n \
\ \"acc_norm\": 0.630718954248366,\n \"acc_norm_stderr\": 0.019524316744866353\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6454545454545455,\n\
\ \"acc_stderr\": 0.04582004841505417,\n \"acc_norm\": 0.6454545454545455,\n\
\ \"acc_norm_stderr\": 0.04582004841505417\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.726530612244898,\n \"acc_stderr\": 0.02853556033712845,\n\
\ \"acc_norm\": 0.726530612244898,\n \"acc_norm_stderr\": 0.02853556033712845\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7910447761194029,\n\
\ \"acc_stderr\": 0.028748298931728655,\n \"acc_norm\": 0.7910447761194029,\n\
\ \"acc_norm_stderr\": 0.028748298931728655\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774708,\n \
\ \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.03684529491774708\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4939759036144578,\n\
\ \"acc_stderr\": 0.03892212195333045,\n \"acc_norm\": 0.4939759036144578,\n\
\ \"acc_norm_stderr\": 0.03892212195333045\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8070175438596491,\n \"acc_stderr\": 0.030267457554898458,\n\
\ \"acc_norm\": 0.8070175438596491,\n \"acc_norm_stderr\": 0.030267457554898458\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2802937576499388,\n\
\ \"mc1_stderr\": 0.015723139524608763,\n \"mc2\": 0.4312192117351884,\n\
\ \"mc2_stderr\": 0.01428070620058338\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.771112865035517,\n \"acc_stderr\": 0.011807360224025402\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.35178165276724793,\n \
\ \"acc_stderr\": 0.013153446023536028\n }\n}\n```"
repo_url: https://huggingface.co/wannaphong/han-llm-7b-v3
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|arc:challenge|25_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|gsm8k|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hellaswag|10_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-12T10-34-55.967829.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-12T10-34-55.967829.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- '**/details_harness|winogrande|5_2024-03-12T10-34-55.967829.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-12T10-34-55.967829.parquet'
- config_name: results
data_files:
- split: 2024_03_12T10_34_55.967829
path:
- results_2024-03-12T10-34-55.967829.parquet
- split: latest
path:
- results_2024-03-12T10-34-55.967829.parquet
---
# Dataset Card for Evaluation run of wannaphong/han-llm-7b-v3
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [wannaphong/han-llm-7b-v3](https://huggingface.co/wannaphong/han-llm-7b-v3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_wannaphong__han-llm-7b-v3",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-03-12T10:34:55.967829](https://huggingface.co/datasets/open-llm-leaderboard/details_wannaphong__han-llm-7b-v3/blob/main/results_2024-03-12T10-34-55.967829.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5940739657168311,
"acc_stderr": 0.03324959562287332,
"acc_norm": 0.5995378543743849,
"acc_norm_stderr": 0.033933345762919995,
"mc1": 0.2802937576499388,
"mc1_stderr": 0.015723139524608763,
"mc2": 0.4312192117351884,
"mc2_stderr": 0.01428070620058338
},
"harness|arc:challenge|25": {
"acc": 0.537542662116041,
"acc_stderr": 0.014570144495075581,
"acc_norm": 0.5870307167235495,
"acc_norm_stderr": 0.014388344935398326
},
"harness|hellaswag|10": {
"acc": 0.6102370045807608,
"acc_stderr": 0.004866997110388195,
"acc_norm": 0.8178649671380203,
"acc_norm_stderr": 0.0038516699346338836
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5407407407407407,
"acc_stderr": 0.04304979692464241,
"acc_norm": 0.5407407407407407,
"acc_norm_stderr": 0.04304979692464241
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.618421052631579,
"acc_stderr": 0.039531733777491945,
"acc_norm": 0.618421052631579,
"acc_norm_stderr": 0.039531733777491945
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6415094339622641,
"acc_stderr": 0.029514703583981772,
"acc_norm": 0.6415094339622641,
"acc_norm_stderr": 0.029514703583981772
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7013888888888888,
"acc_stderr": 0.03827052357950756,
"acc_norm": 0.7013888888888888,
"acc_norm_stderr": 0.03827052357950756
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5780346820809249,
"acc_stderr": 0.037657466938651504,
"acc_norm": 0.5780346820809249,
"acc_norm_stderr": 0.037657466938651504
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4019607843137255,
"acc_stderr": 0.048786087144669955,
"acc_norm": 0.4019607843137255,
"acc_norm_stderr": 0.048786087144669955
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932261,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932261
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.502127659574468,
"acc_stderr": 0.03268572658667492,
"acc_norm": 0.502127659574468,
"acc_norm_stderr": 0.03268572658667492
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4298245614035088,
"acc_stderr": 0.04657047260594963,
"acc_norm": 0.4298245614035088,
"acc_norm_stderr": 0.04657047260594963
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5793103448275863,
"acc_stderr": 0.0411391498118926,
"acc_norm": 0.5793103448275863,
"acc_norm_stderr": 0.0411391498118926
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4021164021164021,
"acc_stderr": 0.02525303255499769,
"acc_norm": 0.4021164021164021,
"acc_norm_stderr": 0.02525303255499769
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3492063492063492,
"acc_stderr": 0.04263906892795132,
"acc_norm": 0.3492063492063492,
"acc_norm_stderr": 0.04263906892795132
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6903225806451613,
"acc_stderr": 0.026302774983517414,
"acc_norm": 0.6903225806451613,
"acc_norm_stderr": 0.026302774983517414
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.46798029556650245,
"acc_stderr": 0.03510766597959217,
"acc_norm": 0.46798029556650245,
"acc_norm_stderr": 0.03510766597959217
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.62,
"acc_stderr": 0.04878317312145633,
"acc_norm": 0.62,
"acc_norm_stderr": 0.04878317312145633
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.6909090909090909,
"acc_stderr": 0.036085410115739666,
"acc_norm": 0.6909090909090909,
"acc_norm_stderr": 0.036085410115739666
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7525252525252525,
"acc_stderr": 0.030746300742124498,
"acc_norm": 0.7525252525252525,
"acc_norm_stderr": 0.030746300742124498
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8341968911917098,
"acc_stderr": 0.026839845022314415,
"acc_norm": 0.8341968911917098,
"acc_norm_stderr": 0.026839845022314415
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5692307692307692,
"acc_stderr": 0.02510682066053975,
"acc_norm": 0.5692307692307692,
"acc_norm_stderr": 0.02510682066053975
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34444444444444444,
"acc_stderr": 0.028972648884844267,
"acc_norm": 0.34444444444444444,
"acc_norm_stderr": 0.028972648884844267
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.5714285714285714,
"acc_stderr": 0.03214536859788639,
"acc_norm": 0.5714285714285714,
"acc_norm_stderr": 0.03214536859788639
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.32450331125827814,
"acc_stderr": 0.038227469376587525,
"acc_norm": 0.32450331125827814,
"acc_norm_stderr": 0.038227469376587525
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.763302752293578,
"acc_stderr": 0.0182240781172991,
"acc_norm": 0.763302752293578,
"acc_norm_stderr": 0.0182240781172991
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4212962962962963,
"acc_stderr": 0.03367462138896079,
"acc_norm": 0.4212962962962963,
"acc_norm_stderr": 0.03367462138896079
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7352941176470589,
"acc_stderr": 0.0309645179269234,
"acc_norm": 0.7352941176470589,
"acc_norm_stderr": 0.0309645179269234
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.729957805907173,
"acc_stderr": 0.028900721906293433,
"acc_norm": 0.729957805907173,
"acc_norm_stderr": 0.028900721906293433
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6681614349775785,
"acc_stderr": 0.031602951437766785,
"acc_norm": 0.6681614349775785,
"acc_norm_stderr": 0.031602951437766785
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7786259541984732,
"acc_stderr": 0.0364129708131373,
"acc_norm": 0.7786259541984732,
"acc_norm_stderr": 0.0364129708131373
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7107438016528925,
"acc_stderr": 0.041391127276354626,
"acc_norm": 0.7107438016528925,
"acc_norm_stderr": 0.041391127276354626
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.6759259259259259,
"acc_stderr": 0.04524596007030048,
"acc_norm": 0.6759259259259259,
"acc_norm_stderr": 0.04524596007030048
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7300613496932515,
"acc_stderr": 0.03487825168497892,
"acc_norm": 0.7300613496932515,
"acc_norm_stderr": 0.03487825168497892
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4107142857142857,
"acc_stderr": 0.04669510663875191,
"acc_norm": 0.4107142857142857,
"acc_norm_stderr": 0.04669510663875191
},
"harness|hendrycksTest-management|5": {
"acc": 0.7572815533980582,
"acc_stderr": 0.04245022486384495,
"acc_norm": 0.7572815533980582,
"acc_norm_stderr": 0.04245022486384495
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8418803418803419,
"acc_stderr": 0.02390232554956039,
"acc_norm": 0.8418803418803419,
"acc_norm_stderr": 0.02390232554956039
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.67,
"acc_stderr": 0.047258156262526094,
"acc_norm": 0.67,
"acc_norm_stderr": 0.047258156262526094
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7905491698595147,
"acc_stderr": 0.014551310568143714,
"acc_norm": 0.7905491698595147,
"acc_norm_stderr": 0.014551310568143714
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6705202312138728,
"acc_stderr": 0.025305258131879716,
"acc_norm": 0.6705202312138728,
"acc_norm_stderr": 0.025305258131879716
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.28938547486033517,
"acc_stderr": 0.01516654455049031,
"acc_norm": 0.28938547486033517,
"acc_norm_stderr": 0.01516654455049031
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6862745098039216,
"acc_stderr": 0.02656892101545715,
"acc_norm": 0.6862745098039216,
"acc_norm_stderr": 0.02656892101545715
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6527331189710611,
"acc_stderr": 0.027040745502307336,
"acc_norm": 0.6527331189710611,
"acc_norm_stderr": 0.027040745502307336
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6728395061728395,
"acc_stderr": 0.026105673861409828,
"acc_norm": 0.6728395061728395,
"acc_norm_stderr": 0.026105673861409828
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.42907801418439717,
"acc_stderr": 0.029525914302558555,
"acc_norm": 0.42907801418439717,
"acc_norm_stderr": 0.029525914302558555
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.423728813559322,
"acc_stderr": 0.012620785155885996,
"acc_norm": 0.423728813559322,
"acc_norm_stderr": 0.012620785155885996
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5845588235294118,
"acc_stderr": 0.02993534270787774,
"acc_norm": 0.5845588235294118,
"acc_norm_stderr": 0.02993534270787774
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.630718954248366,
"acc_stderr": 0.019524316744866353,
"acc_norm": 0.630718954248366,
"acc_norm_stderr": 0.019524316744866353
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6454545454545455,
"acc_stderr": 0.04582004841505417,
"acc_norm": 0.6454545454545455,
"acc_norm_stderr": 0.04582004841505417
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.726530612244898,
"acc_stderr": 0.02853556033712845,
"acc_norm": 0.726530612244898,
"acc_norm_stderr": 0.02853556033712845
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7910447761194029,
"acc_stderr": 0.028748298931728655,
"acc_norm": 0.7910447761194029,
"acc_norm_stderr": 0.028748298931728655
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.84,
"acc_stderr": 0.03684529491774708,
"acc_norm": 0.84,
"acc_norm_stderr": 0.03684529491774708
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4939759036144578,
"acc_stderr": 0.03892212195333045,
"acc_norm": 0.4939759036144578,
"acc_norm_stderr": 0.03892212195333045
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8070175438596491,
"acc_stderr": 0.030267457554898458,
"acc_norm": 0.8070175438596491,
"acc_norm_stderr": 0.030267457554898458
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2802937576499388,
"mc1_stderr": 0.015723139524608763,
"mc2": 0.4312192117351884,
"mc2_stderr": 0.01428070620058338
},
"harness|winogrande|5": {
"acc": 0.771112865035517,
"acc_stderr": 0.011807360224025402
},
"harness|gsm8k|5": {
"acc": 0.35178165276724793,
"acc_stderr": 0.013153446023536028
}
}
```
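For quick inspection, the per-task numbers above can be summarized locally. A minimal sketch, assuming a `results` dict shaped like the JSON shown (only a small subset of the tasks is embedded here for illustration):

```python
# Minimal sketch: average accuracies of the MMLU (hendrycksTest) tasks from a
# results dict shaped like the JSON above. Only a subset is embedded here.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.35},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.5407407407407407},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.618421052631579},
    "harness|winogrande|5": {"acc": 0.771112865035517},
}

# Keep only hendrycksTest tasks and average their accuracies.
mmlu = {k: v["acc"] for k, v in results.items() if "hendrycksTest" in k}
mmlu_avg = sum(mmlu.values()) / len(mmlu)
print(f"MMLU tasks: {len(mmlu)}, mean acc: {mmlu_avg:.4f}")
```

The same pattern applies to `acc_norm`, `mc1`/`mc2`, or any other metric key present in the full results file.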
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
aitamilnadu/marabutamilvenba | ---
license: gpl-3.0
---
~5,000 Tamil venbas (வெண்பா verses), with word-by-word meanings and explanatory notes.
Naladiyar (நாலடியார்)
Pathinenkilkanakku (பதினெண்கீழ்க்கணக்கு, the Eighteen Lesser Texts)
Bhakti (devotional) literature (பக்தி இலக்கியங்கள்)
Eitanli/github-issues | ---
dataset_info:
features:
- name: url
dtype: string
- name: repository_url
dtype: string
- name: labels_url
dtype: string
- name: comments_url
dtype: string
- name: events_url
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: user
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: labels
list:
- name: id
dtype: int64
- name: node_id
dtype: string
- name: url
dtype: string
- name: name
dtype: string
- name: color
dtype: string
- name: default
dtype: bool
- name: description
dtype: string
- name: state
dtype: string
- name: locked
dtype: bool
- name: assignee
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: assignees
list:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: milestone
dtype: 'null'
- name: comments
sequence: string
- name: created_at
dtype: timestamp[s]
- name: updated_at
dtype: timestamp[s]
- name: closed_at
dtype: timestamp[s]
- name: author_association
dtype: string
- name: active_lock_reason
dtype: 'null'
- name: draft
dtype: bool
- name: pull_request
struct:
- name: url
dtype: string
- name: html_url
dtype: string
- name: diff_url
dtype: string
- name: patch_url
dtype: string
- name: merged_at
dtype: timestamp[s]
- name: body
dtype: string
- name: reactions
struct:
- name: url
dtype: string
- name: total_count
dtype: int64
- name: '+1'
dtype: int64
- name: '-1'
dtype: int64
- name: laugh
dtype: int64
- name: hooray
dtype: int64
- name: confused
dtype: int64
- name: heart
dtype: int64
- name: rocket
dtype: int64
- name: eyes
dtype: int64
- name: timeline_url
dtype: string
- name: performed_via_github_app
dtype: 'null'
- name: state_reason
dtype: string
- name: is_pull_request
dtype: bool
splits:
- name: train
num_bytes: 2600208
num_examples: 215
download_size: 683347
dataset_size: 2600208
license: openrail
task_categories:
- text-classification
language:
- en
tags:
- code
pretty_name: github_issues
size_categories:
- 1K<n<10K
---
# Dataset Card for "github-issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
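Since the schema above declares an `is_pull_request` boolean, one common preprocessing step is separating plain issues from pull requests. A minimal sketch over plain dicts (the sample records are invented placeholders; real rows carry the full schema listed above):

```python
# Minimal sketch: split records into issues vs. pull requests using the
# `is_pull_request` field declared in the dataset schema.
# The sample records below are invented placeholders for illustration.
records = [
    {"number": 1, "title": "Fix crash on load", "is_pull_request": False},
    {"number": 2, "title": "Add CI workflow", "is_pull_request": True},
    {"number": 3, "title": "Docs typo", "is_pull_request": False},
]

issues = [r for r in records if not r["is_pull_request"]]
pulls = [r for r in records if r["is_pull_request"]]
print(f"{len(issues)} issues, {len(pulls)} pull requests")
```

With the actual dataset loaded via `datasets`, the equivalent operation is a `.filter(lambda r: not r["is_pull_request"])` on the train split.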
shamotskyi/lmes_catsbin | ---
configs:
- config_name: default
data_files:
- split: train
path: DoAllWordsBelongToCatTask.jsonl
- split: fewshot
path: DoAllWordsBelongToCatTask-fewshot.jsonl
language:
- uk
size_categories:
- 1K<n<10K
license: cc-by-nc-4.0
annotations_creators:
- machine-generated
multilinguality:
- monolingual
#task_ids:
#- multiple-choice-qa
---
# Dataset Card for LMES-cats_bin (Eval-UA-tion benchmark)
This dataset (described in paper **TODO**) is part of the LMES (LMentry-static-UA) set of tasks in the Eval-UA-tion benchmark, which aims to evaluate (L)LMs' Ukrainian language skills.
The LMES dataset is inspired by the (awesome!) LMentry benchmark ([aviaefrat/lmentry](https://github.com/aviaefrat/lmentry/)).
LMES-cats_bin asks whether all of the listed words belong to a certain category or not. |
mb23/cvt1_GS3_1 | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 2139623378.875
num_examples: 13257
- name: test
num_bytes: 745774671.875
num_examples: 4721
download_size: 2721265703
dataset_size: 2885398050.75
---
# Dataset Card for "cvt1_GS3_1"
* Histogram equalization applied to GraySpectrogram3
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mask-distilled-one-sec-cv12/chunk_196 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1100976964
num_examples: 216217
download_size: 1121153333
dataset_size: 1100976964
---
# Dataset Card for "chunk_196"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SminC/pokemon_caption_data | ---
dataset_info:
features:
- name: original_image
dtype: image
- name: edit_prompt
dtype: string
- name: colored_image
dtype: image
splits:
- name: train
num_bytes: 25225724.0
num_examples: 303
download_size: 25174197
dataset_size: 25225724.0
---
# Dataset Card for "pokemon_caption_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
NatarajanRayi/animal-detection | ---
license: openrail
---
|
sazirarrwth99/repo_A_refined | ---
dataset_info:
features:
- name: id
dtype: string
- name: old_id
dtype: string
- name: text
dtype: string
- name: category
dtype: string
- name: size
dtype: string
- name: shape
dtype: string
- name: shape_type
dtype: string
- name: triplets
dtype: string
- name: question_entities
dtype: string
- name: superclasses
dtype: string
- name: triplets_subgraph
dtype: string
- name: superclasses_new_entities
dtype: string
- name: possible_classes
dtype: string
- name: possible_classes_no_comment
dtype: string
- name: possible_object_properties
dtype: string
- name: possible_object_properties_no_comment
dtype: string
- name: possible_data_properties
dtype: string
- name: possible_data_properties_no_comment
dtype: string
splits:
- name: train
num_bytes: 8708774
num_examples: 3009
download_size: 2296219
dataset_size: 8708774
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
JohnTan38/sparrow-image-v1 | ---
license: apache-2.0
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 54752.0
num_examples: 1
download_size: 49269
dataset_size: 54752.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
domebacsi/speechcorpus | ---
license: apache-2.0
---
|
CyberHarem/athena_asamiya_thekingoffighters | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of athena_asamiya_thekingoffighters
This is the dataset of athena_asamiya_thekingoffighters, containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
|