id: open-llm-leaderboard/details_adonlee__LLaMA_2_70B_LoRA
author: open-llm-leaderboard
last_modified: 2023-09-22T21:37:15Z
downloads: 129
likes: 0
tags: ["region:us"]
createdAt: 2023-09-22T21:36:15.000Z
---
pretty_name: Evaluation run of adonlee/LLaMA_2_70B_LoRA
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [adonlee/LLaMA_2_70B_LoRA](https://huggingface.co/adonlee/LLaMA_2_70B_LoRA) on\
\ the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 61 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_adonlee__LLaMA_2_70B_LoRA\"\
,\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\
\nThese are the [latest results from run 2023-09-22T21:35:51.410251](https://huggingface.co/datasets/open-llm-leaderboard/details_adonlee__LLaMA_2_70B_LoRA/blob/main/results_2023-09-22T21-35-51.410251.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You can find each one in the results and the \"latest\"\
\ split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.7077096775676626,\n\
\ \"acc_stderr\": 0.030867670314758275,\n \"acc_norm\": 0.7114995822621553,\n\
\ \"acc_norm_stderr\": 0.030836833292351554,\n \"mc1\": 0.4663402692778458,\n\
\ \"mc1_stderr\": 0.017463793867168106,\n \"mc2\": 0.6451679386365279,\n\
\ \"mc2_stderr\": 0.014753028795637621\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6902730375426621,\n \"acc_stderr\": 0.013512058415238361,\n\
\ \"acc_norm\": 0.726962457337884,\n \"acc_norm_stderr\": 0.013019332762635743\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6886078470424218,\n\
\ \"acc_stderr\": 0.004621163476949205,\n \"acc_norm\": 0.8755228042222665,\n\
\ \"acc_norm_stderr\": 0.003294504807555228\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n\
\ \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6370370370370371,\n\
\ \"acc_stderr\": 0.041539484047424,\n \"acc_norm\": 0.6370370370370371,\n\
\ \"acc_norm_stderr\": 0.041539484047424\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.8223684210526315,\n \"acc_stderr\": 0.03110318238312338,\n\
\ \"acc_norm\": 0.8223684210526315,\n \"acc_norm_stderr\": 0.03110318238312338\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.76,\n\
\ \"acc_stderr\": 0.04292346959909283,\n \"acc_norm\": 0.76,\n \
\ \"acc_norm_stderr\": 0.04292346959909283\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7358490566037735,\n \"acc_stderr\": 0.02713429162874171,\n\
\ \"acc_norm\": 0.7358490566037735,\n \"acc_norm_stderr\": 0.02713429162874171\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8263888888888888,\n\
\ \"acc_stderr\": 0.03167473383795718,\n \"acc_norm\": 0.8263888888888888,\n\
\ \"acc_norm_stderr\": 0.03167473383795718\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956912,\n \
\ \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956912\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.56,\n\
\ \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.41,\n \"acc_stderr\": 0.04943110704237102,\n \
\ \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.04943110704237102\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6936416184971098,\n\
\ \"acc_stderr\": 0.03514942551267439,\n \"acc_norm\": 0.6936416184971098,\n\
\ \"acc_norm_stderr\": 0.03514942551267439\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.37254901960784315,\n \"acc_stderr\": 0.048108401480826346,\n\
\ \"acc_norm\": 0.37254901960784315,\n \"acc_norm_stderr\": 0.048108401480826346\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.78,\n \"acc_stderr\": 0.04163331998932263,\n \"acc_norm\": 0.78,\n\
\ \"acc_norm_stderr\": 0.04163331998932263\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.7106382978723405,\n \"acc_stderr\": 0.02964400657700962,\n\
\ \"acc_norm\": 0.7106382978723405,\n \"acc_norm_stderr\": 0.02964400657700962\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.45614035087719296,\n\
\ \"acc_stderr\": 0.04685473041907789,\n \"acc_norm\": 0.45614035087719296,\n\
\ \"acc_norm_stderr\": 0.04685473041907789\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.6206896551724138,\n \"acc_stderr\": 0.04043461861916746,\n\
\ \"acc_norm\": 0.6206896551724138,\n \"acc_norm_stderr\": 0.04043461861916746\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.47619047619047616,\n \"acc_stderr\": 0.02572209706438853,\n \"\
acc_norm\": 0.47619047619047616,\n \"acc_norm_stderr\": 0.02572209706438853\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5079365079365079,\n\
\ \"acc_stderr\": 0.044715725362943486,\n \"acc_norm\": 0.5079365079365079,\n\
\ \"acc_norm_stderr\": 0.044715725362943486\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \
\ \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.8096774193548387,\n \"acc_stderr\": 0.022331707611823078,\n \"\
acc_norm\": 0.8096774193548387,\n \"acc_norm_stderr\": 0.022331707611823078\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.5714285714285714,\n \"acc_stderr\": 0.034819048444388045,\n \"\
acc_norm\": 0.5714285714285714,\n \"acc_norm_stderr\": 0.034819048444388045\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.78,\n \"acc_stderr\": 0.04163331998932262,\n \"acc_norm\"\
: 0.78,\n \"acc_norm_stderr\": 0.04163331998932262\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.8545454545454545,\n \"acc_stderr\": 0.027530196355066584,\n\
\ \"acc_norm\": 0.8545454545454545,\n \"acc_norm_stderr\": 0.027530196355066584\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.898989898989899,\n \"acc_stderr\": 0.021469735576055343,\n \"\
acc_norm\": 0.898989898989899,\n \"acc_norm_stderr\": 0.021469735576055343\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9326424870466321,\n \"acc_stderr\": 0.0180883938390789,\n\
\ \"acc_norm\": 0.9326424870466321,\n \"acc_norm_stderr\": 0.0180883938390789\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.7102564102564103,\n \"acc_stderr\": 0.023000628243687968,\n\
\ \"acc_norm\": 0.7102564102564103,\n \"acc_norm_stderr\": 0.023000628243687968\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.337037037037037,\n \"acc_stderr\": 0.028820884666253252,\n \
\ \"acc_norm\": 0.337037037037037,\n \"acc_norm_stderr\": 0.028820884666253252\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.7815126050420168,\n \"acc_stderr\": 0.02684151432295893,\n \
\ \"acc_norm\": 0.7815126050420168,\n \"acc_norm_stderr\": 0.02684151432295893\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.4900662251655629,\n \"acc_stderr\": 0.04081677107248436,\n \"\
acc_norm\": 0.4900662251655629,\n \"acc_norm_stderr\": 0.04081677107248436\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.9009174311926605,\n \"acc_stderr\": 0.01280978008187893,\n \"\
acc_norm\": 0.9009174311926605,\n \"acc_norm_stderr\": 0.01280978008187893\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5833333333333334,\n \"acc_stderr\": 0.033622774366080424,\n \"\
acc_norm\": 0.5833333333333334,\n \"acc_norm_stderr\": 0.033622774366080424\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.9019607843137255,\n \"acc_stderr\": 0.0208711184555521,\n \"acc_norm\"\
: 0.9019607843137255,\n \"acc_norm_stderr\": 0.0208711184555521\n },\n\
\ \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\":\
\ 0.8818565400843882,\n \"acc_stderr\": 0.02101105265987847,\n \"\
acc_norm\": 0.8818565400843882,\n \"acc_norm_stderr\": 0.02101105265987847\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7847533632286996,\n\
\ \"acc_stderr\": 0.027584066602208274,\n \"acc_norm\": 0.7847533632286996,\n\
\ \"acc_norm_stderr\": 0.027584066602208274\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8473282442748091,\n \"acc_stderr\": 0.031545216720054725,\n\
\ \"acc_norm\": 0.8473282442748091,\n \"acc_norm_stderr\": 0.031545216720054725\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8760330578512396,\n \"acc_stderr\": 0.030083098716035202,\n \"\
acc_norm\": 0.8760330578512396,\n \"acc_norm_stderr\": 0.030083098716035202\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8425925925925926,\n\
\ \"acc_stderr\": 0.035207039905179635,\n \"acc_norm\": 0.8425925925925926,\n\
\ \"acc_norm_stderr\": 0.035207039905179635\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.8466257668711656,\n \"acc_stderr\": 0.0283116014414386,\n\
\ \"acc_norm\": 0.8466257668711656,\n \"acc_norm_stderr\": 0.0283116014414386\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5714285714285714,\n\
\ \"acc_stderr\": 0.04697113923010213,\n \"acc_norm\": 0.5714285714285714,\n\
\ \"acc_norm_stderr\": 0.04697113923010213\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8252427184466019,\n \"acc_stderr\": 0.03760178006026621,\n\
\ \"acc_norm\": 0.8252427184466019,\n \"acc_norm_stderr\": 0.03760178006026621\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9145299145299145,\n\
\ \"acc_stderr\": 0.01831589168562585,\n \"acc_norm\": 0.9145299145299145,\n\
\ \"acc_norm_stderr\": 0.01831589168562585\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \
\ \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04351941398892446\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8697318007662835,\n\
\ \"acc_stderr\": 0.012036729568216054,\n \"acc_norm\": 0.8697318007662835,\n\
\ \"acc_norm_stderr\": 0.012036729568216054\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7687861271676301,\n \"acc_stderr\": 0.022698657167855713,\n\
\ \"acc_norm\": 0.7687861271676301,\n \"acc_norm_stderr\": 0.022698657167855713\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.646927374301676,\n\
\ \"acc_stderr\": 0.01598420454526858,\n \"acc_norm\": 0.646927374301676,\n\
\ \"acc_norm_stderr\": 0.01598420454526858\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7516339869281046,\n \"acc_stderr\": 0.024739981355113592,\n\
\ \"acc_norm\": 0.7516339869281046,\n \"acc_norm_stderr\": 0.024739981355113592\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7684887459807074,\n\
\ \"acc_stderr\": 0.023956532766639133,\n \"acc_norm\": 0.7684887459807074,\n\
\ \"acc_norm_stderr\": 0.023956532766639133\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.8271604938271605,\n \"acc_stderr\": 0.02103851777015737,\n\
\ \"acc_norm\": 0.8271604938271605,\n \"acc_norm_stderr\": 0.02103851777015737\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.599290780141844,\n \"acc_stderr\": 0.029233465745573096,\n \
\ \"acc_norm\": 0.599290780141844,\n \"acc_norm_stderr\": 0.029233465745573096\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5814863102998696,\n\
\ \"acc_stderr\": 0.012599505608336482,\n \"acc_norm\": 0.5814863102998696,\n\
\ \"acc_norm_stderr\": 0.012599505608336482\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.7316176470588235,\n \"acc_stderr\": 0.026917481224377204,\n\
\ \"acc_norm\": 0.7316176470588235,\n \"acc_norm_stderr\": 0.026917481224377204\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.7679738562091504,\n \"acc_stderr\": 0.017077373377856933,\n \
\ \"acc_norm\": 0.7679738562091504,\n \"acc_norm_stderr\": 0.017077373377856933\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7454545454545455,\n\
\ \"acc_stderr\": 0.041723430387053825,\n \"acc_norm\": 0.7454545454545455,\n\
\ \"acc_norm_stderr\": 0.041723430387053825\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.8081632653061225,\n \"acc_stderr\": 0.025206963154225395,\n\
\ \"acc_norm\": 0.8081632653061225,\n \"acc_norm_stderr\": 0.025206963154225395\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8756218905472637,\n\
\ \"acc_stderr\": 0.023335401790166323,\n \"acc_norm\": 0.8756218905472637,\n\
\ \"acc_norm_stderr\": 0.023335401790166323\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.86,\n \"acc_stderr\": 0.03487350880197769,\n \
\ \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.03487350880197769\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5301204819277109,\n\
\ \"acc_stderr\": 0.03885425420866767,\n \"acc_norm\": 0.5301204819277109,\n\
\ \"acc_norm_stderr\": 0.03885425420866767\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8713450292397661,\n \"acc_stderr\": 0.02567934272327692,\n\
\ \"acc_norm\": 0.8713450292397661,\n \"acc_norm_stderr\": 0.02567934272327692\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4663402692778458,\n\
\ \"mc1_stderr\": 0.017463793867168106,\n \"mc2\": 0.6451679386365279,\n\
\ \"mc2_stderr\": 0.014753028795637621\n }\n}\n```"
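The summary above notes that each run is stored as a split named after the run's timestamp. As an illustration (a hypothetical helper, not part of the `datasets` API), the split name can be derived from the ISO timestamp by replacing the separator characters:

```python
# Hypothetical helper sketching the split-naming convention described in the
# card summary: dashes and colons in the run timestamp become underscores,
# while the fractional-seconds dot is kept.
def run_timestamp_to_split(timestamp: str) -> str:
    return timestamp.replace("-", "_").replace(":", "_")

# The run above, "2023-09-22T21:35:51.410251", maps to the split name
# "2023_09_22T21_35_51.410251" that appears in the configs listing.
print(run_timestamp_to_split("2023-09-22T21:35:51.410251"))
```

The `latest` split is simply an alias for the most recent of these timestamped splits.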
repo_url: https://huggingface.co/adonlee/LLaMA_2_70B_LoRA
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
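Each config below points at per-task parquet files whose names follow one pattern. A minimal sketch (the helper name is hypothetical; the pattern is read off the paths listed under `configs`, where the file timestamp uses dashes in place of colons):

```python
# Hypothetical helper reconstructing the parquet glob used by the configs:
# "**/details_harness|<task>|<n_shots>_<file_ts>.parquet"
def details_glob(task: str, n_shots: int, file_ts: str) -> str:
    return f"**/details_harness|{task}|{n_shots}_{file_ts}.parquet"

# Reproduces the abstract_algebra entry from the listing below.
print(details_glob("hendrycksTest-abstract_algebra", 5,
                   "2023-09-22T21-35-51.410251"))
```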
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|arc:challenge|25_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hellaswag|10_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-22T21-35-51.410251.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-22T21-35-51.410251.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-22T21-35-51.410251.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-22T21-35-51.410251.parquet'
- config_name: results
data_files:
- split: 2023_09_22T21_35_51.410251
path:
- results_2023-09-22T21-35-51.410251.parquet
- split: latest
path:
- results_2023-09-22T21-35-51.410251.parquet
---
# Dataset Card for Evaluation run of adonlee/LLaMA_2_70B_LoRA
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/adonlee/LLaMA_2_70B_LoRA
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [adonlee/LLaMA_2_70B_LoRA](https://huggingface.co/adonlee/LLaMA_2_70B_LoRA) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_adonlee__LLaMA_2_70B_LoRA",
"harness_truthfulqa_mc_0",
    split="latest")
```
## Latest results
These are the [latest results from run 2023-09-22T21:35:51.410251](https://huggingface.co/datasets/open-llm-leaderboard/details_adonlee__LLaMA_2_70B_LoRA/blob/main/results_2023-09-22T21-35-51.410251.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.7077096775676626,
"acc_stderr": 0.030867670314758275,
"acc_norm": 0.7114995822621553,
"acc_norm_stderr": 0.030836833292351554,
"mc1": 0.4663402692778458,
"mc1_stderr": 0.017463793867168106,
"mc2": 0.6451679386365279,
"mc2_stderr": 0.014753028795637621
},
"harness|arc:challenge|25": {
"acc": 0.6902730375426621,
"acc_stderr": 0.013512058415238361,
"acc_norm": 0.726962457337884,
"acc_norm_stderr": 0.013019332762635743
},
"harness|hellaswag|10": {
"acc": 0.6886078470424218,
"acc_stderr": 0.004621163476949205,
"acc_norm": 0.8755228042222665,
"acc_norm_stderr": 0.003294504807555228
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6370370370370371,
"acc_stderr": 0.041539484047424,
"acc_norm": 0.6370370370370371,
"acc_norm_stderr": 0.041539484047424
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.8223684210526315,
"acc_stderr": 0.03110318238312338,
"acc_norm": 0.8223684210526315,
"acc_norm_stderr": 0.03110318238312338
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7358490566037735,
"acc_stderr": 0.02713429162874171,
"acc_norm": 0.7358490566037735,
"acc_norm_stderr": 0.02713429162874171
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8263888888888888,
"acc_stderr": 0.03167473383795718,
"acc_norm": 0.8263888888888888,
"acc_norm_stderr": 0.03167473383795718
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.56,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.56,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.41,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.41,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6936416184971098,
"acc_stderr": 0.03514942551267439,
"acc_norm": 0.6936416184971098,
"acc_norm_stderr": 0.03514942551267439
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.37254901960784315,
"acc_stderr": 0.048108401480826346,
"acc_norm": 0.37254901960784315,
"acc_norm_stderr": 0.048108401480826346
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932263,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932263
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.7106382978723405,
"acc_stderr": 0.02964400657700962,
"acc_norm": 0.7106382978723405,
"acc_norm_stderr": 0.02964400657700962
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.45614035087719296,
"acc_stderr": 0.04685473041907789,
"acc_norm": 0.45614035087719296,
"acc_norm_stderr": 0.04685473041907789
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6206896551724138,
"acc_stderr": 0.04043461861916746,
"acc_norm": 0.6206896551724138,
"acc_norm_stderr": 0.04043461861916746
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.47619047619047616,
"acc_stderr": 0.02572209706438853,
"acc_norm": 0.47619047619047616,
"acc_norm_stderr": 0.02572209706438853
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5079365079365079,
"acc_stderr": 0.044715725362943486,
"acc_norm": 0.5079365079365079,
"acc_norm_stderr": 0.044715725362943486
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8096774193548387,
"acc_stderr": 0.022331707611823078,
"acc_norm": 0.8096774193548387,
"acc_norm_stderr": 0.022331707611823078
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5714285714285714,
"acc_stderr": 0.034819048444388045,
"acc_norm": 0.5714285714285714,
"acc_norm_stderr": 0.034819048444388045
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932262,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932262
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8545454545454545,
"acc_stderr": 0.027530196355066584,
"acc_norm": 0.8545454545454545,
"acc_norm_stderr": 0.027530196355066584
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.898989898989899,
"acc_stderr": 0.021469735576055343,
"acc_norm": 0.898989898989899,
"acc_norm_stderr": 0.021469735576055343
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9326424870466321,
"acc_stderr": 0.0180883938390789,
"acc_norm": 0.9326424870466321,
"acc_norm_stderr": 0.0180883938390789
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.7102564102564103,
"acc_stderr": 0.023000628243687968,
"acc_norm": 0.7102564102564103,
"acc_norm_stderr": 0.023000628243687968
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.337037037037037,
"acc_stderr": 0.028820884666253252,
"acc_norm": 0.337037037037037,
"acc_norm_stderr": 0.028820884666253252
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7815126050420168,
"acc_stderr": 0.02684151432295893,
"acc_norm": 0.7815126050420168,
"acc_norm_stderr": 0.02684151432295893
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.4900662251655629,
"acc_stderr": 0.04081677107248436,
"acc_norm": 0.4900662251655629,
"acc_norm_stderr": 0.04081677107248436
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.9009174311926605,
"acc_stderr": 0.01280978008187893,
"acc_norm": 0.9009174311926605,
"acc_norm_stderr": 0.01280978008187893
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5833333333333334,
"acc_stderr": 0.033622774366080424,
"acc_norm": 0.5833333333333334,
"acc_norm_stderr": 0.033622774366080424
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9019607843137255,
"acc_stderr": 0.0208711184555521,
"acc_norm": 0.9019607843137255,
"acc_norm_stderr": 0.0208711184555521
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8818565400843882,
"acc_stderr": 0.02101105265987847,
"acc_norm": 0.8818565400843882,
"acc_norm_stderr": 0.02101105265987847
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7847533632286996,
"acc_stderr": 0.027584066602208274,
"acc_norm": 0.7847533632286996,
"acc_norm_stderr": 0.027584066602208274
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8473282442748091,
"acc_stderr": 0.031545216720054725,
"acc_norm": 0.8473282442748091,
"acc_norm_stderr": 0.031545216720054725
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8760330578512396,
"acc_stderr": 0.030083098716035202,
"acc_norm": 0.8760330578512396,
"acc_norm_stderr": 0.030083098716035202
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8425925925925926,
"acc_stderr": 0.035207039905179635,
"acc_norm": 0.8425925925925926,
"acc_norm_stderr": 0.035207039905179635
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.8466257668711656,
"acc_stderr": 0.0283116014414386,
"acc_norm": 0.8466257668711656,
"acc_norm_stderr": 0.0283116014414386
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5714285714285714,
"acc_stderr": 0.04697113923010213,
"acc_norm": 0.5714285714285714,
"acc_norm_stderr": 0.04697113923010213
},
"harness|hendrycksTest-management|5": {
"acc": 0.8252427184466019,
"acc_stderr": 0.03760178006026621,
"acc_norm": 0.8252427184466019,
"acc_norm_stderr": 0.03760178006026621
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9145299145299145,
"acc_stderr": 0.01831589168562585,
"acc_norm": 0.9145299145299145,
"acc_norm_stderr": 0.01831589168562585
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8697318007662835,
"acc_stderr": 0.012036729568216054,
"acc_norm": 0.8697318007662835,
"acc_norm_stderr": 0.012036729568216054
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7687861271676301,
"acc_stderr": 0.022698657167855713,
"acc_norm": 0.7687861271676301,
"acc_norm_stderr": 0.022698657167855713
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.646927374301676,
"acc_stderr": 0.01598420454526858,
"acc_norm": 0.646927374301676,
"acc_norm_stderr": 0.01598420454526858
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7516339869281046,
"acc_stderr": 0.024739981355113592,
"acc_norm": 0.7516339869281046,
"acc_norm_stderr": 0.024739981355113592
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7684887459807074,
"acc_stderr": 0.023956532766639133,
"acc_norm": 0.7684887459807074,
"acc_norm_stderr": 0.023956532766639133
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8271604938271605,
"acc_stderr": 0.02103851777015737,
"acc_norm": 0.8271604938271605,
"acc_norm_stderr": 0.02103851777015737
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.599290780141844,
"acc_stderr": 0.029233465745573096,
"acc_norm": 0.599290780141844,
"acc_norm_stderr": 0.029233465745573096
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.5814863102998696,
"acc_stderr": 0.012599505608336482,
"acc_norm": 0.5814863102998696,
"acc_norm_stderr": 0.012599505608336482
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7316176470588235,
"acc_stderr": 0.026917481224377204,
"acc_norm": 0.7316176470588235,
"acc_norm_stderr": 0.026917481224377204
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.7679738562091504,
"acc_stderr": 0.017077373377856933,
"acc_norm": 0.7679738562091504,
"acc_norm_stderr": 0.017077373377856933
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7454545454545455,
"acc_stderr": 0.041723430387053825,
"acc_norm": 0.7454545454545455,
"acc_norm_stderr": 0.041723430387053825
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.8081632653061225,
"acc_stderr": 0.025206963154225395,
"acc_norm": 0.8081632653061225,
"acc_norm_stderr": 0.025206963154225395
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8756218905472637,
"acc_stderr": 0.023335401790166323,
"acc_norm": 0.8756218905472637,
"acc_norm_stderr": 0.023335401790166323
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.86,
"acc_stderr": 0.03487350880197769,
"acc_norm": 0.86,
"acc_norm_stderr": 0.03487350880197769
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5301204819277109,
"acc_stderr": 0.03885425420866767,
"acc_norm": 0.5301204819277109,
"acc_norm_stderr": 0.03885425420866767
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8713450292397661,
"acc_stderr": 0.02567934272327692,
"acc_norm": 0.8713450292397661,
"acc_norm_stderr": 0.02567934272327692
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4663402692778458,
"mc1_stderr": 0.017463793867168106,
"mc2": 0.6451679386365279,
"mc2_stderr": 0.014753028795637621
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
-0.7196020483970642,
-0.9013001322746277,
0.29263705015182495,
0.1945701390504837,
-0.1572677046060562,
-0.048487309366464615,
0.06189156323671341,
-0.25354236364364624,
0.6325222253799438,
-0.040590882301330566,
-0.4541017711162567,
-0.6897741556167603,
-0.4744943082332611,
0.236947715282... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
insub/imdb_prefix20_forDPO_gpt2-large-imdb-FT_siebert_sentiment-roberta-large-english | insub | 2023-10-22T08:02:45Z | 129 | 1 | null | [
"arxiv:2305.18290",
"region:us"
] | 2023-10-22T08:02:45Z | 2023-10-22T07:33:43.000Z | 2023-10-22T07:33:43 | ---
dataset_info:
features:
- name: text
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 23573801
num_examples: 25000
- name: test
num_bytes: 23551578
num_examples: 25000
download_size: 28260315
dataset_size: 47125379
---
# Dataset Card for "imdb_prefix20_forDPO_gpt2-large-imdb-FT_siebert_sentiment-roberta-large-english"
# 1. Purpose of creating the dataset
To reproduce the experiments of the DPO (Direct Preference Optimization) paper
(https://arxiv.org/abs/2305.18290)
# 2. How data is produced
To reproduce the paper's experimental results, we need (text, chosen, rejected) triples.
However, the IMDB data only contains positive or negative reviews, not preference pairs, so it must be restructured into that format.
## 2.1 prepare imdb data
First, download the IMDB data, then truncate each review to its first 20 tokens (using the gpt2-large tokenizer) to form the prompt prefix.
(https://huggingface.co/datasets/imdb)
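The truncation step can be sketched as below. This is a minimal illustration, not the exact script used: the whitespace stub only stands in for the real gpt2-large tokenizer, and the function name is an assumption.

```python
def make_prefix(text, tokenizer, n_tokens=20):
    """Keep only the first `n_tokens` tokens of a review as the prompt prefix."""
    ids = tokenizer(text)["input_ids"][:n_tokens]
    return tokenizer.decode(ids)

# In the real pipeline `tokenizer` would be the gpt2-large tokenizer:
#   from transformers import AutoTokenizer
#   tokenizer = AutoTokenizer.from_pretrained("gpt2-large")
# A whitespace stub is used here only to show the truncation behaviour.
class WhitespaceStub:
    def __call__(self, text):
        return {"input_ids": text.split()}

    def decode(self, ids):
        return " ".join(ids)

prefix = make_prefix("This movie was a complete surprise to me", WhitespaceStub(), n_tokens=5)
```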
## 2.2 generate sentence
A gpt2-large model fine-tuned on IMDB then generates two continuations for each input prefix (text).
(https://github.com/eric-mitchell/direct-preference-optimization/issues/28)
(https://drive.google.com/file/d/1ZPlfmfkCindqJfD8eNrl8kwtMJ2f1Nqv/view)
## 2.3 labeling method
A sentiment classifier (sentiment-roberta-large-english) scores the two continuations; the more positive one is labeled `chosen` and the other `rejected`.
(https://github.com/eric-mitchell/direct-preference-optimization/issues/27)
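The selection logic can be sketched as follows. This is a simplified sketch: it assumes each prefix already comes with two generated continuations and their positive-sentiment scores from the classifier (the scoring call itself is omitted, and the function name is an assumption).

```python
def to_preference_pair(prefix, continuations, positive_scores):
    """Order two generated continuations into (chosen, rejected) by positive-sentiment score."""
    (a, b), (score_a, score_b) = continuations, positive_scores
    chosen, rejected = (a, b) if score_a >= score_b else (b, a)
    # Field names match this dataset's features: text, chosen, rejected.
    return {"text": prefix, "chosen": chosen, "rejected": rejected}

# Scores would come from the sentiment classifier, e.g. a transformers
# pipeline over siebert/sentiment-roberta-large-english (omitted here).
pair = to_preference_pair(
    "The movie was",
    ["great fun from start to finish.", "a waste of two hours."],
    [0.98, 0.03],
)
```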
(https://huggingface.co/siebert/sentiment-roberta-large-english) | [
-0.6095941066741943,
-0.7048621773719788,
0.5415692925453186,
0.3309151232242584,
-0.6227896809577942,
-0.20551282167434692,
-0.16338194906711578,
-0.04969516396522522,
0.19022272527217865,
0.5572649240493774,
-0.695778489112854,
-0.3748711943626404,
-0.6443889737129211,
0.3085736036300659... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bipulai/skillate-solution | bipulai | 2023-11-09T09:44:11Z | 129 | 0 | null | [
"region:us"
] | 2023-11-09T09:44:11Z | 2023-11-09T07:23:28.000Z | 2023-11-09T07:23:28 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
JsSparkYyx/NLP524 | JsSparkYyx | 2023-11-17T04:18:21Z | 129 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-17T04:18:21Z | 2023-11-16T07:30:14.000Z | 2023-11-16T07:30:14 | ---
license: apache-2.0
dataset_info:
- config_name: mnli
features:
- name: source
dtype: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 109962823
num_examples: 392702
- name: test
num_bytes: 5527941
num_examples: 19643
- name: valid
num_bytes: 5548772
num_examples: 19647
download_size: 53460884
dataset_size: 121039536
- config_name: qnli
features:
- name: source
dtype: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 39384063
num_examples: 104743
- name: test
num_bytes: 2088746
num_examples: 5463
- name: valid
num_bytes: 2086659
num_examples: 5463
download_size: 19044246
dataset_size: 43559468
- config_name: qqp
features:
- name: source
dtype: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 65589038
num_examples: 363846
- name: test
num_bytes: 71200676
num_examples: 390965
- name: valid
num_bytes: 7285839
num_examples: 40430
download_size: 67404067
dataset_size: 144075553
- config_name: sst2
features:
- name: source
dtype: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 8730332
num_examples: 67349
- name: test
num_bytes: 327721
num_examples: 1821
- name: valid
num_bytes: 158588
num_examples: 872
download_size: 3370766
dataset_size: 9216641
configs:
- config_name: mnli
data_files:
- split: train
path: mnli/train-*
- split: test
path: mnli/test-*
- split: valid
path: mnli/valid-*
- config_name: qnli
data_files:
- split: train
path: qnli/train-*
- split: test
path: qnli/test-*
- split: valid
path: qnli/valid-*
- config_name: qqp
data_files:
- split: train
path: qqp/train-*
- split: test
path: qqp/test-*
- split: valid
path: qqp/valid-*
- config_name: sst2
data_files:
- split: train
path: sst2/train-*
- split: test
path: sst2/test-*
- split: valid
path: sst2/valid-*
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
aatherton2024/heartbeat_images_final_project | aatherton2024 | 2023-11-27T19:20:02Z | 129 | 0 | null | [
"region:us"
] | 2023-11-27T19:20:02Z | 2023-11-21T01:13:19.000Z | 2023-11-21T01:13:19 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': artifact
'1': extrahls
'2': extrastole
'3': murmur
'4': normal
splits:
- name: train
num_bytes: 47013855.033282906
num_examples: 528
- name: test
num_bytes: 12407699.966717096
num_examples: 133
download_size: 59216414
dataset_size: 59421555.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ai4bharat/IndicParaphrase | ai4bharat | 2022-10-13T06:08:55Z | 128 | 1 | null | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:as",
"language:bn",
"language:gu",
"language:hi",
"language:kn",
"language:ml",
"language:mr",
"language:or",
"language:pa",
... | 2022-10-13T06:08:55Z | 2022-03-09T11:28:53.000Z | 2022-03-09T11:28:53 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
license:
- cc-by-nc-4.0
multilinguality:
- multilingual
pretty_name: IndicParaphrase
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- conditional-text-generation
task_ids:
- conditional-text-generation-other-paraphrase-generation
---
# Dataset Card for "IndicParaphrase"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://indicnlp.ai4bharat.org/indicnlg-suite
- **Paper:** [IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages](https://arxiv.org/abs/2203.05437)
- **Point of Contact:**
### Dataset Summary
IndicParaphrase is the paraphrasing dataset released as part of the IndicNLG Suite. Each
input is paired with up to 5 references. The dataset covers eleven
languages: as, bn, gu, hi, kn, ml, mr, or, pa, ta and te. The total
size of the dataset is 5.57M examples.
### Supported Tasks and Leaderboards
**Tasks:** Paraphrase generation
**Leaderboards:** Currently there is no leaderboard for this dataset.
### Languages
- `Assamese (as)`
- `Bengali (bn)`
- `Gujarati (gu)`
- `Hindi (hi)`
- `Kannada (kn)`
- `Malayalam (ml)`
- `Marathi (mr)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`
## Dataset Structure
### Data Instances
One example from the `hi` dataset is given below in JSON format.
```
{
'id': '1',
'input': 'निजी क्षेत्र में प्रदेश की 75 प्रतिशत नौकरियां हरियाणा के युवाओं के लिए आरक्षित की जाएगी।',
'references': ['प्रदेश के युवाओं को निजी उद्योगों में 75 प्रतिशत आरक्षण देंगे।',
'युवाओं के लिए हरियाणा की सभी प्राइवेट नौकरियों में 75 प्रतिशत आरक्षण लागू किया जाएगा।',
'निजी क्षेत्र में 75 प्रतिशत आरक्षित लागू कर प्रदेश के युवाओं का रोजगार सुनिश्चत किया जाएगा।',
'प्राईवेट कम्पनियों में हरियाणा के नौजवानों को 75 प्रतिशत नौकरियां में आरक्षित की जाएगी।',
'प्रदेश की प्राइवेट फैक्टरियों में 75 फीसदी रोजगार हरियाणा के युवाओं के लिए आरक्षित किए जाएंगे।'],
'target': 'प्रदेश के युवाओं को निजी उद्योगों में 75 प्रतिशत आरक्षण देंगे।'
}
```
### Data Fields
- `id (string)`: Unique identifier.
- `pivot (string)`: English sentence used as the pivot
- `input (string)`: Input sentence
- `references (list of strings)`: Paraphrases of `input`, ordered by increasing n-gram overlap with the input (most dissimilar first)
- `target (string)`: The first reference (most dissimilar paraphrase)
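As a sketch of this ordering (the exact overlap metric used by the authors is detailed in the paper; the function names below are illustrative, not from the IndicNLG codebase), the `references` list can be reproduced by sorting candidate paraphrases by increasing n-gram overlap with the input:

```python
def ngrams(tokens, n):
    """Set of word n-grams of a token list."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def overlap(input_text, reference, n=1):
    """Jaccard similarity between the n-gram sets of two sentences."""
    a, b = ngrams(input_text.split(), n), ngrams(reference.split(), n)
    return len(a & b) / max(len(a | b), 1)

def order_references(input_text, references, n=1):
    # Least overlap first, so references[0] is the most dissimilar
    # paraphrase -- the one used as `target`.
    return sorted(references, key=lambda r: overlap(input_text, r, n))
```

Under this ordering, taking `target = order_references(input, references)[0]` selects the most lexically dissimilar paraphrase, which matches the `target` field described above.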
### Data Splits
We first select 10K instances each for the validation and test sets and assign the remaining instances to the training set. `Assamese (as)`, due to its low-resource nature, could only be split into validation and test sets of 4,420 examples each.
Individual dataset with train-dev-test example counts are given below:
Language | ISO 639-1 Code | Train | Dev | Test |
---------|----------------|---------|--------|--------|
Assamese | as | - | 4,420 | 4,420 |
Bengali | bn | 890,445 | 10,000 | 10,000 |
Gujarati | gu | 379,202 | 10,000 | 10,000 |
Hindi | hi | 929,507 | 10,000 | 10,000 |
Kannada | kn | 522,148 | 10,000 | 10,000 |
Malayalam | ml | 761,933 | 10,000 | 10,000 |
Marathi | mr | 406,003 | 10,000 | 10,000 |
Oriya | or | 105,970 | 10,000 | 10,000 |
Punjabi | pa | 266,704 | 10,000 | 10,000 |
Tamil | ta | 497,798 | 10,000 | 10,000 |
Telugu | te | 596,283 | 10,000 | 10,000 |
## Dataset Creation
### Curation Rationale
[More information needed]
### Source Data
[Samanantar dataset](https://indicnlp.ai4bharat.org/samanantar/)
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Annotations
[More information needed]
#### Annotation process
[More information needed]
#### Who are the annotators?
[More information needed]
### Personal and Sensitive Information
[More information needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed]
### Discussion of Biases
[More information needed]
### Other Known Limitations
[More information needed]
## Additional Information
### Dataset Curators
[More information needed]
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the datasets, models or code modules, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437"
}
```
### Contributions
| [
-0.26986879110336304,
-0.5766347050666809,
-0.036479637026786804,
0.49628427624702454,
-0.40251466631889343,
-0.009471495635807514,
-0.6357916593551636,
-0.2490920126438141,
0.28947365283966064,
0.42457711696624756,
-0.5029799342155457,
-0.8017547726631165,
-0.6713603138923645,
0.593494176... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigscience/collaborative_catalog | bigscience | 2022-05-10T20:24:47Z | 128 | 1 | null | [
"license:cc-by-4.0",
"region:us"
] | 2022-05-10T20:24:47Z | 2022-05-10T19:28:07.000Z | 2022-05-10T19:28:07 | ---
license: cc-by-4.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Skelebor/book_titles_and_descriptions_en_clean | Skelebor | 2022-06-28T11:23:46Z | 128 | 1 | null | [
"region:us"
] | 2022-06-28T11:23:46Z | 2022-06-28T10:45:53.000Z | 2022-06-28T10:45:53 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bertin-project/alpaca-spanish | bertin-project | 2023-03-24T11:38:19Z | 128 | 19 | null | [
"task_categories:text-generation",
"language:es",
"license:cc-by-4.0",
"instruction-finetuning",
"region:us"
] | 2023-03-24T11:38:19Z | 2023-03-20T11:51:06.000Z | 2023-03-20T11:51:06 | ---
license: cc-by-4.0
language:
- es
tags:
- instruction-finetuning
pretty_name: BERTIN Alpaca Spanish
task_categories:
- text-generation
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 21439975
num_examples: 51942
download_size: 13178075
dataset_size: 21439975
---
# BERTIN Alpaca Spanish
This dataset is a Spanish translation of [alpaca_data_cleaned.json](https://github.com/tloen/alpaca-lora/blob/main/alpaca_data_cleaned.json), a cleaned version of the [Alpaca dataset created at Stanford](https://huggingface.co/datasets/tatsu-lab/alpaca).
An [earlier version](https://huggingface.co/datasets/bertin-project/alpaca-spanish/blob/main/nllb/spa_train.json.gz) used [Facebook's NLLB 1.3B model](https://huggingface.co/facebook/nllb-200-1.3B), but the current version uses OpenAI's `gpt-3.5-turbo`, hence this dataset cannot be used to create models that compete in any way against OpenAI. | [
-0.5013660192489624,
-0.672335684299469,
0.08625717461109161,
0.6053491830825806,
-0.42176175117492676,
-0.20741763710975647,
-0.018622135743498802,
-0.8362780213356018,
0.7717358469963074,
0.5514686703681946,
-0.8678146600723267,
-0.5570492744445801,
-0.5234256386756897,
0.035855833441019... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
open-llm-leaderboard/details_fangloveskari__ORCA_LLaMA_70B_QLoRA | open-llm-leaderboard | 2023-09-23T16:47:43Z | 128 | 0 | null | [
"region:us"
] | 2023-09-23T16:47:43Z | 2023-08-29T08:51:40.000Z | 2023-08-29T08:51:40 | ---
pretty_name: Evaluation run of fangloveskari/ORCA_LLaMA_70B_QLoRA
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [fangloveskari/ORCA_LLaMA_70B_QLoRA](https://huggingface.co/fangloveskari/ORCA_LLaMA_70B_QLoRA)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_fangloveskari__ORCA_LLaMA_70B_QLoRA\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-09-23T16:47:31.229796](https://huggingface.co/datasets/open-llm-leaderboard/details_fangloveskari__ORCA_LLaMA_70B_QLoRA/blob/main/results_2023-09-23T16-47-31.229796.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.3109270134228188,\n\
\ \"em_stderr\": 0.004740252668251192,\n \"f1\": 0.47044567953020594,\n\
\ \"f1_stderr\": 0.004325159736671571,\n \"acc\": 0.5600850420632693,\n\
\ \"acc_stderr\": 0.011402883443890944\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.3109270134228188,\n \"em_stderr\": 0.004740252668251192,\n\
\ \"f1\": 0.47044567953020594,\n \"f1_stderr\": 0.004325159736671571\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.2835481425322214,\n \
\ \"acc_stderr\": 0.012415070917508125\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8366219415943172,\n \"acc_stderr\": 0.010390695970273764\n\
\ }\n}\n```"
repo_url: https://huggingface.co/fangloveskari/ORCA_LLaMA_70B_QLoRA
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|arc:challenge|25_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_09_23T16_47_31.229796
path:
- '**/details_harness|drop|3_2023-09-23T16-47-31.229796.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-09-23T16-47-31.229796.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_09_23T16_47_31.229796
path:
- '**/details_harness|gsm8k|5_2023-09-23T16-47-31.229796.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-09-23T16-47-31.229796.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hellaswag|10_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-29T08:51:06.198415.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-29T08:51:06.198415.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-29T08:51:06.198415.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_09_23T16_47_31.229796
path:
- '**/details_harness|winogrande|5_2023-09-23T16-47-31.229796.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-09-23T16-47-31.229796.parquet'
- config_name: results
data_files:
- split: 2023_08_29T08_51_06.198415
path:
- results_2023-08-29T08:51:06.198415.parquet
- split: 2023_09_23T16_47_31.229796
path:
- results_2023-09-23T16-47-31.229796.parquet
- split: latest
path:
- results_2023-09-23T16-47-31.229796.parquet
---
# Dataset Card for Evaluation run of fangloveskari/ORCA_LLaMA_70B_QLoRA
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/fangloveskari/ORCA_LLaMA_70B_QLoRA
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [fangloveskari/ORCA_LLaMA_70B_QLoRA](https://huggingface.co/fangloveskari/ORCA_LLaMA_70B_QLoRA) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_fangloveskari__ORCA_LLaMA_70B_QLoRA",
"harness_winogrande_5",
split="train")
```
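Each non-`latest` split name encodes the run timestamp, with `_` standing in for `-` and `:`. A small helper (a sketch, assuming only the naming convention shown in the configs above) can pick the most recent run from a config's split names:

```python
from datetime import datetime

def latest_run(split_names):
    """Return the most recent timestamped split name.

    Split names follow the pattern 2023_08_29T08_51_06.198415;
    the literal 'latest' alias is skipped.
    """
    stamps = [s for s in split_names if s != "latest"]
    return max(stamps, key=lambda s: datetime.strptime(s, "%Y_%m_%dT%H_%M_%S.%f"))

print(latest_run(["2023_08_29T08_51_06.198415",
                  "2023_09_23T16_47_31.229796",
                  "latest"]))
```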
## Latest results
These are the [latest results from run 2023-09-23T16:47:31.229796](https://huggingface.co/datasets/open-llm-leaderboard/details_fangloveskari__ORCA_LLaMA_70B_QLoRA/blob/main/results_2023-09-23T16-47-31.229796.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and in the "latest" split for each eval):
```python
{
"all": {
"em": 0.3109270134228188,
"em_stderr": 0.004740252668251192,
"f1": 0.47044567953020594,
"f1_stderr": 0.004325159736671571,
"acc": 0.5600850420632693,
"acc_stderr": 0.011402883443890944
},
"harness|drop|3": {
"em": 0.3109270134228188,
"em_stderr": 0.004740252668251192,
"f1": 0.47044567953020594,
"f1_stderr": 0.004325159736671571
},
"harness|gsm8k|5": {
"acc": 0.2835481425322214,
"acc_stderr": 0.012415070917508125
},
"harness|winogrande|5": {
"acc": 0.8366219415943172,
"acc_stderr": 0.010390695970273764
}
}
```
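For this run, the `all` block is consistent with a plain, unweighted per-metric mean over the tasks that report that metric (`acc` over gsm8k and winogrande here). A quick consistency check (a sketch, not the leaderboard's actual aggregation code):

```python
# Per-task accuracies copied from the results above.
task_acc = {
    "harness|gsm8k|5": 0.2835481425322214,
    "harness|winogrande|5": 0.8366219415943172,
}

# Unweighted mean, which matches the reported "all" accuracy.
mean_acc = sum(task_acc.values()) / len(task_acc)
assert abs(mean_acc - 0.5600850420632693) < 1e-12
print(mean_acc)
```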
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
---
pretty_name: Evaluation run of uni-tianyan/Uni-TianYan
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [uni-tianyan/Uni-TianYan](https://huggingface.co/uni-tianyan/Uni-TianYan) on the\
\ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_uni-tianyan__Uni-TianYan\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-09-18T02:40:09.826211](https://huggingface.co/datasets/open-llm-leaderboard/details_uni-tianyan__Uni-TianYan/blob/main/results_2023-09-18T02-40-09.826211.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.24486157718120805,\n\
\ \"em_stderr\": 0.004403654691385411,\n \"f1\": 0.39787751677852523,\n\
\ \"f1_stderr\": 0.004155160727794137,\n \"acc\": 0.5222921265482389,\n\
\ \"acc_stderr\": 0.01107896164608613\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.24486157718120805,\n \"em_stderr\": 0.004403654691385411,\n\
\ \"f1\": 0.39787751677852523,\n \"f1_stderr\": 0.004155160727794137\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.221379833206975,\n \
\ \"acc_stderr\": 0.011436000004253518\n },\n \"harness|winogrande|5\":\
\ {\n \"acc\": 0.8232044198895028,\n \"acc_stderr\": 0.010721923287918744\n\
\ }\n}\n```"
repo_url: https://huggingface.co/uni-tianyan/Uni-TianYan
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|arc:challenge|25_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_09_18T02_40_09.826211
path:
- '**/details_harness|drop|3_2023-09-18T02-40-09.826211.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-09-18T02-40-09.826211.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_09_18T02_40_09.826211
path:
- '**/details_harness|gsm8k|5_2023-09-18T02-40-09.826211.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-09-18T02-40-09.826211.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hellaswag|10_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-03T12:27:36.436118.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-03T12:27:36.436118.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-03T12:27:36.436118.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_09_18T02_40_09.826211
path:
- '**/details_harness|winogrande|5_2023-09-18T02-40-09.826211.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-09-18T02-40-09.826211.parquet'
- config_name: results
data_files:
- split: 2023_09_03T12_27_36.436118
path:
- results_2023-09-03T12:27:36.436118.parquet
- split: 2023_09_18T02_40_09.826211
path:
- results_2023-09-18T02-40-09.826211.parquet
- split: latest
path:
- results_2023-09-18T02-40-09.826211.parquet
---
# Dataset Card for Evaluation run of uni-tianyan/Uni-TianYan
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/uni-tianyan/Uni-TianYan
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [uni-tianyan/Uni-TianYan](https://huggingface.co/uni-tianyan/Uni-TianYan) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_uni-tianyan__Uni-TianYan",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-18T02:40:09.826211](https://huggingface.co/datasets/open-llm-leaderboard/details_uni-tianyan__Uni-TianYan/blob/main/results_2023-09-18T02-40-09.826211.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.24486157718120805,
"em_stderr": 0.004403654691385411,
"f1": 0.39787751677852523,
"f1_stderr": 0.004155160727794137,
"acc": 0.5222921265482389,
"acc_stderr": 0.01107896164608613
},
"harness|drop|3": {
"em": 0.24486157718120805,
"em_stderr": 0.004403654691385411,
"f1": 0.39787751677852523,
"f1_stderr": 0.004155160727794137
},
"harness|gsm8k|5": {
"acc": 0.221379833206975,
"acc_stderr": 0.011436000004253518
},
"harness|winogrande|5": {
"acc": 0.8232044198895028,
"acc_stderr": 0.010721923287918744
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
-0.3071233630180359,
-0.5458738207817078,
0.2006438672542572,
0.2548966705799103,
-0.2224346101284027,
0.15189805626869202,
-0.4062587320804596,
-0.13559098541736603,
0.3601977527141571,
0.6073208451271057,
-0.6192150712013245,
-0.9431413412094116,
-0.6111635565757751,
0.1793980747461319,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
euirim/goodwiki | euirim | 2023-09-11T04:56:26Z | 128 | 21 | null | [
"task_categories:text-generation",
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"region:us"
] | 2023-09-11T04:56:26Z | 2023-09-09T08:31:30.000Z | 2023-09-09T08:31:30 | ---
license: mit
task_categories:
- text-generation
- summarization
language:
- en
pretty_name: GoodWiki
size_categories:
- 10K<n<100K
---
# GoodWiki Dataset
GoodWiki is a 179 million token dataset of English Wikipedia articles collected on **September 4, 2023**, that have been marked as [Good](https://en.wikipedia.org/wiki/Wikipedia:Good_articles) or [Featured](https://en.wikipedia.org/wiki/Wikipedia:Featured_articles) by Wikipedia editors. The dataset provides these articles in [GitHub-flavored Markdown](https://github.github.com/gfm/) format, preserving layout features like lists, code blocks, math, and block quotes, unlike many other public Wikipedia datasets. Articles are accompanied by a short description of the page as well as any associated categories.
Thanks to a careful conversion process from wikicode, the markup language used by Wikipedia, articles in GoodWiki are generally faithful reproductions of the corresponding original Wikipedia pages, minus references, files, infoboxes, and tables. Curated template transclusion and HTML tag handling have minimized instances where entire words and phrases are missing mid-sentence.
The hope is that this more comprehensive data will play a small role in improving open-source NLP efforts in language modeling, summarization, and instruction tuning.
GoodWiki is more than 1.5 times larger (when compared using the same tokenizer) than the widely used [WikiText-103](https://huggingface.co/datasets/wikitext) dataset by Merity et al., even after excluding article descriptions. GoodWiki was inspired by WikiText, which is likewise limited to articles marked as Good or Featured.
The code used to build this dataset can be found on [GitHub](https://github.com/euirim/goodwiki).
## Table of Contents
* [Composition](#composition)
* [Languages](#languages)
* [Markdown Details](#markdown-details)
* [Methodology](#methodology)
* [Alternatives Considered](#alternatives-considered)
* [Limitations](#limitations)
* [Future Work](#future-work)
* [License](#license)
* [Citation](#citation)
* [Feedback and Contributions](#feedback-and-contributions)
## Composition
The dataset consists of **44,754 rows** in a **482.7 MB** snappy-compressed Parquet file. Each row consists of the following fields:
* `pageid` (`int64`): The Wikipedia id of the article.
* `title` (`string`): The title of the article.
* `revid` (`int64`): The Wikipedia id of the revision used.
* `description` (`string | null`): Plaintext short description/summary of the article written by Wikipedia contributors.
* `categories` (`list[string]`): The article's Wikipedia categories.
* `markdown` (`string`): The content of the article in GitHub-flavored Markdown format.
Here's an example row in JSON format:
```json
{
"pageid": 40961074,
"title": "Attarsiya",
"revid": 1164804042,
"description": "Military leader of Ahhiya",
"categories": [
"Ancient Anatolia",
"Greek military leaders",
"Mycenaean Greeks"
],
"markdown": "Attarsiya was a 15th–14th century BCE military leader of Ahhiya. In the Hittite archives of circa 1400 BCE, he is described as a \"man of Ahhiya\", a country identified with the Achaeans and Mycenaean Greece. The campaigns of Attarsiya, as well as his conflict with the Hittite vassal, Madduwatta, represent the first recorded Mycenaean Greek military activity on the Anatolian mainland, as well as the first conflict between Achaeans and Hittites...",
}
```
The markdown field contains a total of **179,198,101 tokens** tokenized using HuggingFace's pretrained `facebook/opt-350m` tokenizer. It also contains **811,791,686 characters** and **132,691,055 words**.
Even with the markdown formatting, GoodWiki can also be used as a plaintext dataset, as markdown formatting syntax is fairly minimal.
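As an illustrative sketch (not the official counting script), the character and word statistics above can be reproduced for any markdown string with the standard library alone. Note that the token count in the card instead uses HuggingFace's pretrained `facebook/opt-350m` tokenizer, which would require the `transformers` library:

```python
def text_stats(markdown: str) -> dict:
    # Characters are counted directly; words are whitespace-delimited,
    # mirroring the simple character/word totals reported above.
    return {
        "characters": len(markdown),
        "words": len(markdown.split()),
    }

stats = text_stats("Attarsiya was a military leader of Ahhiya.")
print(stats)  # {'characters': 42, 'words': 7}
```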
### Languages
While articles are taken exclusively from English Wikipedia, they sometimes contain small snippets from other languages as well as recurring use of the [International Phonetic Alphabet](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet) in article ledes. Some articles include code blocks in pseudocode as well as in popular programming languages.
### Markdown Details
GoodWiki articles follow the GitHub-flavored Markdown spec, including for blockquotes, code blocks, and lists. Bolding, italicizing, underlining, and strikethroughs have been removed, as they introduce a lot of noise, especially in math/computing articles.
Some markdown details are worth highlighting:
#### Math
Content in math templates and XML tags are enclosed in markdown with `$` delimiters. For example,
```xml
<math>O(n^2)</math>
```
becomes: `$O(n^2)$`.
#### Super/Subscript
Superscripts and subscripts are denoted using `<sup></sup>` and `<sub></sub>` tags respectively.
#### \$ and \#
Dollar signs and hashes are escaped with `\` to avoid interfering with math and heading syntax.
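As a minimal sketch (this is not the project's actual conversion code), such escaping could be done with a regular expression:

```python
import re

def escape_md_specials(text: str) -> str:
    # Escape dollar signs and hashes so they don't collide with
    # math ($...$) and heading (#) syntax in the markdown output.
    return re.sub(r"([$#])", r"\\\1", text)

print(escape_md_specials("Costs $5 per #2 pencil"))  # Costs \$5 per \#2 pencil
```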
## Methodology
On the evening of September 4, 2023 PT, we downloaded the wikicode of articles associated with the [Good](https://en.wikipedia.org/wiki/Category:Good_articles) and [Featured](https://en.wikipedia.org/wiki/Category:Featured_articles) categories in the main namespace (`ns=0`) on Wikipedia via the [Query API](https://www.mediawiki.org/wiki/API:Query).
After some preprocessing including removing comments, applying magic words, and removing unrecognized or unnecessary template tags, we sent the resulting code to Wikipedia's [Expandtemplates API](https://www.mediawiki.org/wiki/API:Expandtemplates). This endpoint [transcludes](https://en.wikipedia.org/wiki/Help:Transclusion) template tags, turning them into HTML and plaintext. We chose the templates to transclude by counting all the templates used across the dataset and selecting the ones that are not rare, not used for citations, and not used for asides like infoboxes and tables.
The Expandtemplates output is then postprocessed. During this phase, we remove sections associated with references (e.g. `Sources Cited`), extract text from wikilinks and external links, delete media links, and handle [HTML tags](https://en.wikipedia.org/wiki/Help:HTML_in_wikitext). The postprocessed output is then converted to GitHub-flavored Markdown using [Pandoc](https://pandoc.org/). We also discarded articles detected by Pandoc to have corrupt wikicode (`n=125`).
The markdown output is then cleaned using regular expressions to remove excessive spacing, empty list items, unnecessary escaping, and resolve other problems with Pandoc's conversion. We normalized the markdown output unicode to a composed form (NFKC).
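As a sketch of just the normalization step (assuming nothing beyond the standard library), NFKC composition can be applied with `unicodedata`:

```python
import unicodedata

def normalize_markdown(text: str) -> str:
    # NFKC composes characters (e.g. 'e' + combining acute -> 'é') and
    # replaces compatibility characters (e.g. the ligature 'ﬁ' -> 'fi').
    return unicodedata.normalize("NFKC", text)

s = "e\u0301tude"              # 'e' followed by a combining acute accent
print(normalize_markdown(s))   # étude, with a single composed character
```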
### Alternatives Considered
#### Converting End-To-End Using Pandoc
While Pandoc can in theory convert raw wikicode to markdown, it is **not** a complete wikicode parser and therefore often produces errant output without preprocessing. Furthermore, direct conversion of raw wikicode would lose a lot of the content attached to wikicode templates as Pandoc cannot perform transclusion.
#### Using TextExtracts API
Wikipedia has a [TextExtracts](https://www.mediawiki.org/wiki/Extension:TextExtracts#API) API that directly outputs a limited HTML or plaintext version of a page given that page's title. In practice, we found the HTML output generated by this endpoint to often contain malformed or incomplete HTML with injected references that are difficult to parse. The plaintext output was also often poor, including reference artifacts and missing content.
Other caveats are listed [here](https://www.mediawiki.org/wiki/Extension:TextExtracts#API) and were the reasons why this approach was discarded.
#### Transcluding All Templates
During the preprocessing process, we eliminate templates outside of a given subset. We did this because we found that transcluding all templates injected a lot of noise in the output, including janky HTML, styles, references, and unnecessary content. This noise made parsing difficult and error-prone, resulting in poor quality markdown littered with artifacts similar to those visible in the TextExtracts output.
Transcluding a subset largely solved these issues while still preserving as much content as possible.
## Limitations
* Chemical equations sometimes include formatting issues like unnecessary line-breaks. These equations, however, are rare.
* In articles about ancient civilizations and languages, rare Unicode characters are occasionally included in the markdown. It might be worth removing these characters during the tokenization process.
* In rare cases, book/article names may be missing from the markdown as they are considered citations in the wikicode.
* Inflation data is missing from some articles. These articles use the `Inflation` template tag to include this information, which works poorly with the Expandtemplates API.
* Articles may feature empty sections due to table/box removal.
* Some code blocks are denoted using indents instead of formal code blocks. This is due to the original wikicode not denoting them as such.
* The template subset allowed for transclusion will probably need to be updated for use with future data dumps, as the list of templates used on Wikipedia is constantly evolving.
## Future Work
Time permitting, we hope to apply this careful conversion/generation process to all of English Wikipedia, which will require our conversion script to be much faster and better parallelized. We also hope to extract other information from pages, like entries in infoboxes, that could be useful for question answering and instruction tuning applications.
If you're interested in helping out, please reach out!
## License
The dataset and accompanying [code](https://github.com/euirim/goodwiki) are licensed under an **MIT license**. Pandoc, which must be downloaded separately, is GPL-licensed.
While this project is permissively licensed, we hope that you contribute any improvements you make to this dataset.
## Citation
If you use the GoodWiki Dataset in your research or projects, please cite it using the following citation:
```tex
@misc{GoodWiki,
title = {GoodWiki Dataset},
author = {Choi, Euirim},
howpublished = {\url{https://www.github.com/euirim/goodwiki}},
month = {September},
year = {2023}
}
```
## Feedback and Contributions
Contributions via pull requests and discussions are welcome. If you don't know how you could help improve this project, please look at the [Future Work](#future-work) section.
Was this dataset useful for your work? Please let us know. We'd love to feature your project :) | [
-0.8225212097167969,
-0.5561835765838623,
0.1562872976064682,
-0.04861767590045929,
-0.3102693259716034,
-0.2774726450443268,
-0.4495033323764801,
-0.4003135561943054,
0.5580825805664062,
0.2986796200275421,
-0.43238162994384766,
-0.5555458068847656,
-0.4401637613773346,
0.4150876998901367... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
RIW/small-coco-wm_50_2 | RIW | 2023-10-08T03:32:30Z | 128 | 0 | null | [
"region:us"
] | 2023-10-08T03:32:30Z | 2023-10-08T03:30:37.000Z | 2023-10-08T03:30:37 | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
- name: url
dtype: string
- name: key
dtype: string
- name: status
dtype: string
- name: error_message
dtype: 'null'
- name: width
dtype: int64
- name: height
dtype: int64
- name: original_width
dtype: int64
- name: original_height
dtype: int64
- name: exif
dtype: string
- name: sha256
dtype: string
splits:
- name: train
num_bytes: 781729596.182
num_examples: 8362
- name: validation
num_bytes: 851865993.632
num_examples: 8514
download_size: 554825307
dataset_size: 1633595589.8140001
---
# Dataset Card for "small-coco-wm_50_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6822188496589661,
-0.2445274442434311,
0.1391795426607132,
0.3209247887134552,
-0.27214282751083374,
0.036518532782793045,
0.00260726734995842,
-0.24320858716964722,
0.8794689774513245,
0.3877883851528168,
-0.8710180521011353,
-0.6091702580451965,
-0.6536642909049988,
-0.151834934949874... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
codesignal/wine-quality | codesignal | 2023-10-14T14:10:21Z | 128 | 0 | null | [
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2023-10-14T14:10:21Z | 2023-10-14T05:13:00.000Z | 2023-10-14T05:13:00 | ---
license: cc-by-4.0
language:
- en
pretty_name: Wine Quality
size_categories:
- 1K<n<10K
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DataProvenanceInitiative/cot_submix_original | DataProvenanceInitiative | 2023-10-16T17:31:56Z | 128 | 0 | null | [
"region:us"
] | 2023-10-16T17:31:56Z | 2023-10-16T17:31:52.000Z | 2023-10-16T17:31:52 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: task_source
dtype: string
- name: task_name
dtype: string
- name: template_type
dtype: string
splits:
- name: train
num_bytes: 209004809
num_examples: 183848
download_size: 100293074
dataset_size: 209004809
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "cot_submix_original"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6877744197845459,
-0.1753116250038147,
0.037254221737384796,
0.2491912543773651,
-0.5308089256286621,
0.151519313454628,
0.285266637802124,
0.0824112668633461,
0.9574611186981201,
0.6875176429748535,
-0.9364620447158813,
-0.7334725856781006,
-0.7538391947746277,
-0.21101056039333344,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SM200203102097/skinDiseasesDetectionModel | SM200203102097 | 2023-10-20T12:19:36Z | 128 | 1 | null | [
"region:us"
] | 2023-10-20T12:19:36Z | 2023-10-20T12:17:14.000Z | 2023-10-20T12:17:14 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Actinic_keratoses
'1': Basal_cell_carcinoma
'2': Benign_keratosis
'3': Dermatofibroma
'4': Melanocytic_nevi
'5': Melanoma
'6': Vascular_lesions
splits:
- name: train
num_bytes: 1918967282.53
num_examples: 11865
download_size: 2809338083
dataset_size: 1918967282.53
---
# Dataset Card for "skinDiseasesDetectionModel"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3945610523223877,
-0.21885059773921967,
-0.07857179641723633,
0.16895203292369843,
-0.24748018383979797,
0.13274946808815002,
0.3682953417301178,
-0.34567779302597046,
0.6816023588180542,
0.743589460849762,
-0.8947026133537292,
-0.9737508893013,
-0.4540003538131714,
-0.4149000346660614,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sahil2801/chess_test | sahil2801 | 2023-11-13T16:27:12Z | 128 | 1 | null | [
"region:us"
] | 2023-11-13T16:27:12Z | 2023-11-11T12:49:05.000Z | 2023-11-11T12:49:05 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Gbssreejith/sample_dataset | Gbssreejith | 2023-11-22T04:57:20Z | 128 | 0 | null | [
"region:us"
] | 2023-11-22T04:57:20Z | 2023-11-22T04:57:05.000Z | 2023-11-22T04:57:05 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 9399089.0
num_examples: 131
- name: validation
num_bytes: 1146660.0
num_examples: 16
- name: test
num_bytes: 1247924.0
num_examples: 17
download_size: 11050883
dataset_size: 11793673.0
---
# Dataset Card for "sample_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5997641086578369,
-0.2540747821331024,
0.14794471859931946,
0.09100141376256943,
-0.2789721190929413,
0.009223884902894497,
0.2817455530166626,
-0.12582305073738098,
0.913913905620575,
0.4068566560745239,
-0.9456608891487122,
-0.7595519423484802,
-0.5293059945106506,
-0.2124529331922531... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SetFit/amazon_massive_intent_vi-VN | SetFit | 2022-05-06T09:12:04Z | 127 | 0 | null | [
"region:us"
] | 2022-05-06T09:12:04Z | 2022-05-06T09:12:01.000Z | 2022-05-06T09:12:01 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AlekseyKorshuk/hellaswag | AlekseyKorshuk | 2022-06-06T10:33:23Z | 127 | 2 | null | [
"region:us"
] | 2022-06-06T10:33:23Z | 2022-06-06T10:33:09.000Z | 2022-06-06T10:33:09 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
erickdp/kdataset | erickdp | 2022-06-14T14:50:01Z | 127 | 0 | null | [
"region:us"
] | 2022-06-14T14:50:01Z | 2022-06-14T14:49:54.000Z | 2022-06-14T14:49:54 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
s3prl/SNIPS | s3prl | 2022-06-25T07:05:22Z | 127 | 0 | null | [
"license:mit",
"region:us"
] | 2022-06-25T07:05:22Z | 2022-06-17T05:40:35.000Z | 2022-06-17T05:40:35 | ---
license: mit
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
CShorten/Liking-User-Bios | CShorten | 2022-06-28T12:17:15Z | 127 | 0 | null | [
"region:us"
] | 2022-06-28T12:17:15Z | 2022-06-27T13:08:43.000Z | 2022-06-27T13:08:43 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SZTAKI-HLT/HunSum-1 | SZTAKI-HLT | 2023-01-24T16:21:00Z | 127 | 3 | null | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"multilinguality:monolingual",
"language:hu",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2023-01-24T16:21:00Z | 2023-01-06T07:42:26.000Z | 2023-01-06T07:42:26 | ---
language:
- hu
multilinguality:
- monolingual
task_categories:
- summarization
task_ids:
- news-articles-summarization
pretty_name: HunSum-1
license: cc-by-nc-sa-4.0
---
# Dataset Card for HunSum-1
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
### Dataset Summary
The HunSum-1 Dataset is a Hungarian-language dataset containing over 1.1M unique news articles with lead and other metadata. The dataset contains articles from 9 major Hungarian news websites.
### Supported Tasks and Leaderboards
- 'summarization'
- 'title generation'
## Dataset Structure
### Data Fields
- `uuid`: a string containing the unique id
- `article`: a string containing the body of the news article
- `lead`: a string containing the lead of the article
- `title`: a string containing the title of the article
- `url`: a string containing the URL for the article
- `domain`: a string containing the domain of the url
- `date_of_creation`: a timestamp containing the date when the article was created
- `tags`: a sequence containing the tags of the article
### Data Splits
The HunSum-1 dataset has 3 splits: _train_, _validation_, and _test_.
| Dataset Split | Number of Instances in Split |
| ------------- | ------------------------------------------- |
| Train | 1,144,255 |
| Validation | 1996 |
| Test | 1996 |
## Citation
If you use our dataset, please cite the following paper:
```
@inproceedings {HunSum-1,
title = {{HunSum-1: an Abstractive Summarization Dataset for Hungarian}},
booktitle = {XIX. Magyar Számítógépes Nyelvészeti Konferencia (MSZNY 2023)},
year = {2023},
publisher = {Szegedi Tudományegyetem, Informatikai Intézet},
address = {Szeged, Magyarország},
author = {Barta, Botond and Lakatos, Dorina and Nagy, Attila and Nyist, Mil{\'{a}}n Konor and {\'{A}}cs, Judit},
pages = {231--243}
}
``` | [
-0.4123528003692627,
-0.45676755905151367,
0.04548726975917816,
0.17932263016700745,
-0.40923646092414856,
-0.3006645441055298,
-0.20065289735794067,
-0.24885886907577515,
0.3388942778110504,
0.40128496289253235,
-0.49128788709640503,
-1.0631805658340454,
-0.45832139253616333,
0.2531265616... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Multimodal-Fatima/StanfordCars_train | Multimodal-Fatima | 2023-06-12T06:26:48Z | 127 | 0 | null | [
"region:us"
] | 2023-06-12T06:26:48Z | 2023-01-28T02:30:01.000Z | 2023-01-28T02:30:01 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': am general hummer suv 2000
'1': acura rl sedan 2012
'2': acura tl sedan 2012
'3': acura tl type-s 2008
'4': acura tsx sedan 2012
'5': acura integra type r 2001
'6': acura zdx hatchback 2012
'7': aston martin v8 vantage convertible 2012
'8': aston martin v8 vantage coupe 2012
'9': aston martin virage convertible 2012
'10': aston martin virage coupe 2012
'11': audi rs 4 convertible 2008
'12': audi a5 coupe 2012
'13': audi tts coupe 2012
'14': audi r8 coupe 2012
'15': audi v8 sedan 1994
'16': audi 100 sedan 1994
'17': audi 100 wagon 1994
'18': audi tt hatchback 2011
'19': audi s6 sedan 2011
'20': audi s5 convertible 2012
'21': audi s5 coupe 2012
'22': audi s4 sedan 2012
'23': audi s4 sedan 2007
'24': audi tt rs coupe 2012
'25': bmw activehybrid 5 sedan 2012
'26': bmw 1 series convertible 2012
'27': bmw 1 series coupe 2012
'28': bmw 3 series sedan 2012
'29': bmw 3 series wagon 2012
'30': bmw 6 series convertible 2007
'31': bmw x5 suv 2007
'32': bmw x6 suv 2012
'33': bmw m3 coupe 2012
'34': bmw m5 sedan 2010
'35': bmw m6 convertible 2010
'36': bmw x3 suv 2012
'37': bmw z4 convertible 2012
'38': bentley continental supersports conv. convertible 2012
'39': bentley arnage sedan 2009
'40': bentley mulsanne sedan 2011
'41': bentley continental gt coupe 2012
'42': bentley continental gt coupe 2007
'43': bentley continental flying spur sedan 2007
'44': bugatti veyron 16.4 convertible 2009
'45': bugatti veyron 16.4 coupe 2009
'46': buick regal gs 2012
'47': buick rainier suv 2007
'48': buick verano sedan 2012
'49': buick enclave suv 2012
'50': cadillac cts-v sedan 2012
'51': cadillac srx suv 2012
'52': cadillac escalade ext crew cab 2007
'53': chevrolet silverado 1500 hybrid crew cab 2012
'54': chevrolet corvette convertible 2012
'55': chevrolet corvette zr1 2012
'56': chevrolet corvette ron fellows edition z06 2007
'57': chevrolet traverse suv 2012
'58': chevrolet camaro convertible 2012
'59': chevrolet hhr ss 2010
'60': chevrolet impala sedan 2007
'61': chevrolet tahoe hybrid suv 2012
'62': chevrolet sonic sedan 2012
'63': chevrolet express cargo van 2007
'64': chevrolet avalanche crew cab 2012
'65': chevrolet cobalt ss 2010
'66': chevrolet malibu hybrid sedan 2010
'67': chevrolet trailblazer ss 2009
'68': chevrolet silverado 2500hd regular cab 2012
'69': chevrolet silverado 1500 classic extended cab 2007
'70': chevrolet express van 2007
'71': chevrolet monte carlo coupe 2007
'72': chevrolet malibu sedan 2007
'73': chevrolet silverado 1500 extended cab 2012
'74': chevrolet silverado 1500 regular cab 2012
'75': chrysler aspen suv 2009
'76': chrysler sebring convertible 2010
'77': chrysler town and country minivan 2012
'78': chrysler 300 srt-8 2010
'79': chrysler crossfire convertible 2008
'80': chrysler pt cruiser convertible 2008
'81': daewoo nubira wagon 2002
'82': dodge caliber wagon 2012
'83': dodge caliber wagon 2007
'84': dodge caravan minivan 1997
'85': dodge ram pickup 3500 crew cab 2010
'86': dodge ram pickup 3500 quad cab 2009
'87': dodge sprinter cargo van 2009
'88': dodge journey suv 2012
'89': dodge dakota crew cab 2010
'90': dodge dakota club cab 2007
'91': dodge magnum wagon 2008
'92': dodge challenger srt8 2011
'93': dodge durango suv 2012
'94': dodge durango suv 2007
'95': dodge charger sedan 2012
'96': dodge charger srt-8 2009
'97': eagle talon hatchback 1998
'98': fiat 500 abarth 2012
'99': fiat 500 convertible 2012
'100': ferrari ff coupe 2012
'101': ferrari california convertible 2012
'102': ferrari 458 italia convertible 2012
'103': ferrari 458 italia coupe 2012
'104': fisker karma sedan 2012
'105': ford f-450 super duty crew cab 2012
'106': ford mustang convertible 2007
'107': ford freestar minivan 2007
'108': ford expedition el suv 2009
'109': ford edge suv 2012
'110': ford ranger supercab 2011
'111': ford gt coupe 2006
'112': ford f-150 regular cab 2012
'113': ford f-150 regular cab 2007
'114': ford focus sedan 2007
'115': ford e-series wagon van 2012
'116': ford fiesta sedan 2012
'117': gmc terrain suv 2012
'118': gmc savana van 2012
'119': gmc yukon hybrid suv 2012
'120': gmc acadia suv 2012
'121': gmc canyon extended cab 2012
'122': geo metro convertible 1993
'123': hummer h3t crew cab 2010
'124': hummer h2 sut crew cab 2009
'125': honda odyssey minivan 2012
'126': honda odyssey minivan 2007
'127': honda accord coupe 2012
'128': honda accord sedan 2012
'129': hyundai veloster hatchback 2012
'130': hyundai santa fe suv 2012
'131': hyundai tucson suv 2012
'132': hyundai veracruz suv 2012
'133': hyundai sonata hybrid sedan 2012
'134': hyundai elantra sedan 2007
'135': hyundai accent sedan 2012
'136': hyundai genesis sedan 2012
'137': hyundai sonata sedan 2012
'138': hyundai elantra touring hatchback 2012
'139': hyundai azera sedan 2012
'140': infiniti g coupe ipl 2012
'141': infiniti qx56 suv 2011
'142': isuzu ascender suv 2008
'143': jaguar xk xkr 2012
'144': jeep patriot suv 2012
'145': jeep wrangler suv 2012
'146': jeep liberty suv 2012
'147': jeep grand cherokee suv 2012
'148': jeep compass suv 2012
'149': lamborghini reventon coupe 2008
'150': lamborghini aventador coupe 2012
'151': lamborghini gallardo lp 570-4 superleggera 2012
'152': lamborghini diablo coupe 2001
'153': land rover range rover suv 2012
'154': land rover lr2 suv 2012
'155': lincoln town car sedan 2011
'156': mini cooper roadster convertible 2012
'157': maybach landaulet convertible 2012
'158': mazda tribute suv 2011
'159': mclaren mp4-12c coupe 2012
'160': mercedes-benz 300-class convertible 1993
'161': mercedes-benz c-class sedan 2012
'162': mercedes-benz sl-class coupe 2009
'163': mercedes-benz e-class sedan 2012
'164': mercedes-benz s-class sedan 2012
'165': mercedes-benz sprinter van 2012
'166': mitsubishi lancer sedan 2012
'167': nissan leaf hatchback 2012
'168': nissan nv passenger van 2012
'169': nissan juke hatchback 2012
'170': nissan 240sx coupe 1998
'171': plymouth neon coupe 1999
'172': porsche panamera sedan 2012
'173': ram c/v cargo van minivan 2012
'174': rolls-royce phantom drophead coupe convertible 2012
'175': rolls-royce ghost sedan 2012
'176': rolls-royce phantom sedan 2012
'177': scion xd hatchback 2012
'178': spyker c8 convertible 2009
'179': spyker c8 coupe 2009
'180': suzuki aerio sedan 2007
'181': suzuki kizashi sedan 2012
'182': suzuki sx4 hatchback 2012
'183': suzuki sx4 sedan 2012
'184': tesla model s sedan 2012
'185': toyota sequoia suv 2012
'186': toyota camry sedan 2012
'187': toyota corolla sedan 2012
'188': toyota 4runner suv 2012
'189': volkswagen golf hatchback 2012
'190': volkswagen golf hatchback 1991
'191': volkswagen beetle hatchback 2012
'192': volvo c30 hatchback 2012
'193': volvo 240 sedan 1993
'194': volvo xc90 suv 2007
'195': smart fortwo convertible 2012
- name: id
dtype: int64
- name: clip_tags_ViT_L_14
sequence: string
- name: LLM_Description_gpt3_downstream_tasks_ViT_L_14
sequence: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14
sequence: string
- name: blip_caption_beam_5
dtype: string
- name: Attributes_ViT_L_14_text_davinci_003_full
sequence: string
- name: Attributes_ViT_L_14_text_davinci_003_stanfordcars
sequence: string
- name: clip_tags_ViT_L_14_with_openai_classes
sequence: string
- name: clip_tags_ViT_L_14_wo_openai_classes
sequence: string
- name: clip_tags_ViT_L_14_simple_specific
dtype: string
- name: clip_tags_ViT_L_14_ensemble_specific
dtype: string
- name: clip_tags_ViT_B_16_simple_specific
dtype: string
- name: clip_tags_ViT_B_16_ensemble_specific
dtype: string
- name: clip_tags_ViT_B_32_ensemble_specific
dtype: string
- name: Attributes_ViT_B_16_descriptors_text_davinci_003_full
sequence: string
- name: Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B_simple_specific
dtype: string
- name: clip_tags_LAION_ViT_H_14_2B_ensemble_specific
dtype: string
- name: Attributes_ViT_L_14_descriptors_text_davinci_003_full
sequence: string
splits:
- name: train
num_bytes: 1016273762.0
num_examples: 8144
download_size: 991440998
dataset_size: 1016273762.0
---
# Dataset Card for "StanfordCars_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5765655040740967,
-0.02484332025051117,
0.2848576009273529,
0.5238633155822754,
-0.15133577585220337,
-0.15683114528656006,
0.15626047551631927,
-0.13496005535125732,
0.5280023813247681,
0.3066977858543396,
-0.9239703416824341,
-0.5214220285415649,
-0.32434701919555664,
-0.4467590451240... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish | Lajonbot | 2023-05-09T05:27:32Z | 127 | 5 | null | [
"region:us"
] | 2023-05-09T05:27:32Z | 2023-05-09T05:27:30.000Z | 2023-05-09T05:27:30 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kotzeje/lamini_docs.jsonl | kotzeje | 2023-08-24T12:35:32Z | 127 | 2 | null | [
"region:us"
] | 2023-08-24T12:35:32Z | 2023-08-24T12:35:29.000Z | 2023-08-24T12:35:29 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 573589
num_examples: 1400
download_size: 283465
dataset_size: 573589
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "lamini_docs.jsonl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6043457388877869,
-0.23693659901618958,
0.2851744592189789,
0.11466295272111893,
-0.22390633821487427,
0.14216329157352448,
0.14665575325489044,
0.10313709825277328,
0.5873035192489624,
0.7761560678482056,
-0.8142802119255066,
-0.9195615649223328,
-0.6768949031829834,
-0.170111998915672... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
OfekGlick/DiscoEval | OfekGlick | 2023-11-06T14:06:49Z | 127 | 0 | null | [
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"license:bsd",
"Discourse",
"Discourse Evaluation",
"NLP",
"arxiv:1909.00142",
"region:us"
] | 2023-11-06T14:06:49Z | 2023-09-22T23:22:52.000Z | 2023-09-22T23:22:52 | ---
license: bsd
task_categories:
- text-classification
language:
- en
tags:
- Discourse
- Discourse Evaluation
- NLP
pretty_name: DiscoEval
size_categories:
- 100K<n<1M
---
# DiscoEval Benchmark Datasets
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Sources](#dataset-sources)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Benchmark Creators](#benchmark-creators)
- [Citation Information](#citation-information)
- [Loading Data Examples](#loading-data-examples)
- [Loading Data for Sentence Positioning Task with the Arxiv data source](#loading-data-for-sentence-positioning-task-with-the-arxiv-data-source)
## Dataset Description
- **Repository:** [DiscoEval repository](https://github.com/ZeweiChu/DiscoEval)
- **Paper:** [Evaluation Benchmarks and Learning Criteria for Discourse-Aware Sentence Representations](https://arxiv.org/pdf/1909.00142)
### Dataset Summary
DiscoEval is an English-language benchmark that contains a test suite of 7
tasks to evaluate whether sentence representations include semantic information
relevant to discourse processing. The benchmark datasets offer a collection of
tasks designed to evaluate natural language understanding models in the context
of discourse analysis and coherence.
### Dataset Sources
- **Arxiv**: A repository of scientific papers and research articles.
- **Wikipedia**: An extensive online encyclopedia with articles on diverse topics.
- **Rocstory**: A dataset consisting of fictional stories.
- **Ubuntu IRC channel**: Conversational data extracted from the Ubuntu Internet Relay Chat (IRC) channel.
- **PeerRead**: A dataset of scientific papers frequently used for discourse-related tasks.
- **RST Discourse Treebank**: A dataset annotated with Rhetorical Structure Theory (RST) discourse relations.
- **Penn Discourse Treebank**: Another dataset with annotated discourse relations, facilitating the study of discourse structure.
### Supported Tasks
1. **Sentence Positioning**
   - **Dataset Sources**: Arxiv, Wikipedia, Rocstory
   - **Description**: Determine the correct placement of a sentence within a given context of five sentences. To form the input when training classifiers, encode the five sentences into vector representations \\(x_i\\). As input to the classifier, we include \\(x_1\\) and the concatenation of \\(x_1 - x_i\\) for all \\(i\\): \\([x_1, x_1 - x_2, x_1 - x_3, x_1 - x_4, x_1 - x_5]\\)
2. **Binary Sentence Ordering**
   - **Dataset Sources**: Arxiv, Wikipedia, Rocstory
   - **Description**: Determine whether two sentences are in the correct consecutive order, identifying the more coherent structure. To form the input when training classifiers, we concatenate the embeddings of both sentences with their element-wise difference: \\([x_1, x_2, x_1 - x_2]\\)
3. **Discourse Coherence**
   - **Dataset Sources**: Ubuntu IRC channel, Wikipedia
   - **Description**: Determine whether a sequence of six sentences forms a coherent paragraph. To form the input when training classifiers, encode all sentences into vector representations and concatenate them: \\([x_1, x_2, x_3, x_4, x_5, x_6]\\)
4. **Sentence Section Prediction**
   - **Dataset Sources**: Constructed from PeerRead
   - **Description**: Determine the section or category to which a sentence belongs within a scientific paper, based on its content and context. To form the input when training classifiers, simply use the sentence embedding.
5. **Discourse Relations**
   - **Dataset Sources**: RST Discourse Treebank, Penn Discourse Treebank
   - **Description**: Identify and classify discourse relations between sentences or text segments, helping to reveal the structure and flow of discourse. To form the input when training classifiers, refer to the [original paper](https://arxiv.org/pdf/1909.00142) for instructions.
### Languages
The text in all datasets is in English. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
All tasks are classification tasks, and they differ by the number of sentences per example and the type of label.
An example from the Sentence Positioning task would look as follows:
```
{'sentence_1': 'Dan was overweight as well.',
 'sentence_2': "Dan's parents were overweight.",
 'sentence_3': 'The doctors told his parents it was unhealthy.',
 'sentence_4': 'His parents understood and decided to make a change.',
 'sentence_5': 'They got themselves and Dan on a diet.',
 'label': '1'
}
```
The label is '1' since the first sentence should go at position number 1 (counting from zero).
An example from the Binary Sentence Ordering task would look as follows:
```
{'sentence_1': 'When she walked in, she felt awkward.',
 'sentence_2': "Janet decided to go to her high school's party.",
 'label': '0'
}
```
The label is '0' because this is not the correct order of the sentences. It should be sentence_2 and then sentence_1.
For more examples, refer to the [original paper](https://arxiv.org/pdf/1909.00142).
### Data Fields
In this benchmark, all data fields are string, including the labels.
### Data Splits
The data is split into training, validation and test set for each of the tasks in the benchmark.
| Task and Dataset | Train | Valid | Test |
| ----- | ------ | ----- | ---- |
| Sentence Positioning: Arxiv| 10000 | 4000 | 4000|
| Sentence Positioning: Rocstory| 10000 | 4000 | 4000|
| Sentence Positioning: Wiki| 10000 | 4000 | 4000|
| Binary Sentence Ordering: Arxiv| 20000 | 8000 | 8000|
| Binary Sentence Ordering: Rocstory| 20000 | 8000 | 8000|
| Binary Sentence Ordering: Wiki| 20000 | 8000 | 8000|
| Discourse Coherence: Chat| 5816 | 1834 | 2418|
| Discourse Coherence: Wiki| 10000 | 4000 | 4000|
| Sentence Section Prediction | 10000 | 4000 | 4000 |
| Discourse Relation: Penn Discourse Tree Bank: Implicit | 8693 | 2972 | 3024 |
| Discourse Relation: Penn Discourse Tree Bank: Explicit | 9383 | 3613 | 3758 |
| Discourse Relation: RST Discourse Tree Bank | 17051 | 2045 | 2308 |
## Additional Information
### Benchmark Creators
This benchmark was created by Mingda Chen, Zewei Chu and Kevin Gimpel during work done at the University of Chicago and the Toyota Technological Institute at Chicago.
### Citation Information
```
@inproceedings{mchen-discoeval-19,
title = {Evaluation Benchmarks and Learning Criteria for Discourse-Aware Sentence Representations},
author = {Mingda Chen and Zewei Chu and Kevin Gimpel},
booktitle = {Proc. of {EMNLP}},
year={2019}
}
```
## Loading Data Examples
### Loading Data for Sentence Positioning Task with the Arxiv data source
```python
from datasets import load_dataset
# Load the Sentence Positioning dataset
dataset = load_dataset(path="OfekGlick/DiscoEval", name="SParxiv")
# Access the train, validation, and test splits
train_data = dataset["train"]
validation_data = dataset["validation"]
test_data = dataset["test"]
# Example usage: print the first few training examples
# (slicing with [:5] would return a dict of columns, not individual examples)
for example in train_data.select(range(5)):
    print(example)
```
The other possible inputs for the `name` parameter are:
`SParxiv`, `SProcstory`, `SPwiki`, `SSPabs`, `PDTB-I`, `PDTB-E`, `BSOarxiv`, `BSOrocstory`, `BSOwiki`, `DCchat`, `DCwiki`, `RST` | [
-0.2556816041469574,
-0.8110112547874451,
0.3650718629360199,
0.26709631085395813,
-0.19685372710227966,
-0.04363057762384415,
-0.055037789046764374,
-0.2730157673358917,
-0.0998210683465004,
0.2590551972389221,
-0.36073991656303406,
-0.6846623420715332,
-0.5196130871772766,
0.115694478154... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
VatsaDev/TinyText | VatsaDev | 2023-10-15T15:19:25Z | 127 | 26 | null | [
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:en",
"license:mit",
"code",
"region:us"
] | 2023-10-15T15:19:25Z | 2023-10-02T00:36:39.000Z | 2023-10-02T00:36:39 | ---
license: mit
task_categories:
- question-answering
- text-generation
language:
- en
tags:
- code
size_categories:
- 1M<n<10M
---
The entire NanoPhi Dataset is at train.jsonl
Separate Tasks Include
- Math (Metamath, mammoth)
- Code (Code Search Net)
- Logic (Open-platypus)
- Roleplay (PIPPA, RoleplayIO)
- Textbooks (Tiny-text, Sciphi)
- Textbook QA (Orca-text, Tiny-webtext) | [
-0.4795493185520172,
-0.3587476909160614,
0.18834593892097473,
0.10303743183612823,
0.127650648355484,
0.05619687959551811,
0.02007010206580162,
0.00536534795537591,
-0.08077254146337509,
0.6615421175956726,
-0.8341043591499329,
-0.4488191604614258,
-0.22561201453208923,
0.2808167636394501... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ai4bharat/IndicWikiBio | ai4bharat | 2022-10-13T06:08:34Z | 126 | 0 | null | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1960<n<11,502",
"source_datasets:none. Originally generated from www.wikimedia.org.",
"language:as",
"language:bn",
"language:hi",
"language:kn",
"language:ml",
"language:or",
"lan... | 2022-10-13T06:08:34Z | 2022-03-10T09:59:23.000Z | 2022-03-10T09:59:23 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- as
- bn
- hi
- kn
- ml
- or
- pa
- ta
- te
license:
- cc-by-nc-4.0
multilinguality:
- multilingual
pretty_name: IndicWikiBio
size_categories:
- 1960<n<11,502
source_datasets:
- none. Originally generated from www.wikimedia.org.
task_categories:
- conditional-text-generation
task_ids:
- conditional-text-generation-other-wikibio
---
# Dataset Card for "IndicWikiBio"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://indicnlp.ai4bharat.org/indicnlg-suite
- **Paper:** [IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages](https://arxiv.org/abs/2203.05437)
- **Point of Contact:**
### Dataset Summary
The WikiBio dataset was released as part of the IndicNLG Suite. Each
example has four fields: id, infobox, serialized infobox and summary. The dataset covers nine
languages: as, bn, hi, kn, ml, or, pa, ta and te. The total
size of the dataset is 57,426 examples.
### Supported Tasks and Leaderboards
**Tasks:** WikiBio
**Leaderboards:** Currently there is no Leaderboard for this dataset.
### Languages
- `Assamese (as)`
- `Bengali (bn)`
- `Kannada (kn)`
- `Hindi (hi)`
- `Malayalam (ml)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`
## Dataset Structure
### Data Instances
One random example from the `hi` dataset is given below in JSON format.
```
{
"id": 26,
"infobox": "name_1:सी॰\tname_2:एल॰\tname_3:रुआला\toffice_1:सांसद\toffice_2:-\toffice_3:मिजोरम\toffice_4:लोक\toffice_5:सभा\toffice_6:निर्वाचन\toffice_7:क्षेत्र\toffice_8:।\toffice_9:मिजोरम\tterm_1:2014\tterm_2:से\tterm_3:2019\tnationality_1:भारतीय",
"serialized_infobox": "<TAG> name </TAG> सी॰ एल॰ रुआला <TAG> office </TAG> सांसद - मिजोरम लोक सभा निर्वाचन क्षेत्र । मिजोरम <TAG> term </TAG> 2014 से 2019 <TAG> nationality </TAG> भारतीय",
"summary": "सी॰ एल॰ रुआला भारत की सोलहवीं लोक सभा के सांसद हैं।"
}
```
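The serialized infobox follows a simple `<TAG> field </TAG> value` convention. As a rough sketch — the parsing rule here is inferred from the example above, not taken from any official IndicNLG tooling — it can be turned back into field/value pairs like this:

```python
import re

def parse_serialized_infobox(serialized: str) -> dict:
    # Split on "<TAG> field </TAG>" markers; the capturing group keeps
    # the field names in the result of re.split.
    parts = re.split(r"<TAG>\s*(.*?)\s*</TAG>", serialized)
    # parts[0] is any text before the first tag (usually empty), then
    # field names and field values alternate.
    return {name: value.strip() for name, value in zip(parts[1::2], parts[2::2])}

# Simplified ASCII stand-in for the Hindi example above:
box = parse_serialized_infobox("<TAG> name </TAG> C L Ruala <TAG> term </TAG> 2014 to 2019")
print(box)  # {'name': 'C L Ruala', 'term': '2014 to 2019'}
```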
### Data Fields
- `id (string)`: Unique identifier.
- `infobox (string)`: Raw Infobox.
- `serialized_infobox (string)`: Serialized Infobox as input.
- `summary (string)`: Summary of Infobox/First line of Wikipedia page.
### Data Splits
Here is the number of samples in each split for all the languages.
Language | ISO 639-1 Code | Train | Test | Val |
---------- | ---------- | ---------- | ---------- | ---------- |
Assamese | as | 1,300 | 391 | 381 |
Bengali | bn | 4,615 | 1,521 | 1,567 |
Hindi | hi | 5,684 | 1,919 | 1,853 |
Kannada | kn | 1,188 | 389 | 383 |
Malayalam | ml | 5,620 | 1,835 | 1,896 |
Oriya | or | 1,687 | 558 | 515 |
Punjabi | pa | 3,796 | 1,227 | 1,331 |
Tamil | ta | 8,169 | 2,701 | 2,632 |
Telugu | te | 2,594 | 854 | 820 |
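The per-language counts above sum to the stated dataset size of 57,426, which can be checked quickly:

```python
# Split sizes (train, test, val) as quoted in the table above.
splits = {
    "as": (1300, 391, 381),
    "bn": (4615, 1521, 1567),
    "hi": (5684, 1919, 1853),
    "kn": (1188, 389, 383),
    "ml": (5620, 1835, 1896),
    "or": (1687, 558, 515),
    "pa": (3796, 1227, 1331),
    "ta": (8169, 2701, 2632),
    "te": (2594, 854, 820),
}
total = sum(sum(counts) for counts in splits.values())
print(total)  # 57426
```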
## Dataset Creation
### Curation Rationale
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Source Data
None
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Annotations
[More information needed]
#### Annotation process
[More information needed]
#### Who are the annotators?
[More information needed]
### Personal and Sensitive Information
[More information needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed]
### Discussion of Biases
[More information needed]
### Other Known Limitations
[More information needed]
## Additional Information
### Dataset Curators
[More information needed]
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the datasets, models or code modules, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437",
```
### Contributions
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
| [
-0.5350375175476074,
-0.5788118839263916,
-0.14698095619678497,
0.3524123430252075,
-0.33297213912010193,
0.19931113719940186,
-0.7023279070854187,
-0.4821528196334839,
0.591633677482605,
0.2797814905643463,
-0.6523904800415039,
-0.9013240337371826,
-0.5935102105140686,
0.5799123048782349,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cfilt/iwn_wordlists | cfilt | 2022-11-23T12:06:02Z | 126 | 2 | plod-filtered | [
"task_categories:token-classification",
"annotations_creators:Shivam Mhaskar, Diptesh Kanojia",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:as",
"language:bn",
"language:mni",
"language:gu",
"language:hi",
"langua... | 2022-11-23T12:06:02Z | 2022-03-18T11:56:41.000Z | 2022-03-18T11:56:41 | ---
annotations_creators:
- Shivam Mhaskar, Diptesh Kanojia
language_creators:
- found
language:
- as
- bn
- mni
- gu
- hi
- kn
- ks
- kok
- ml
- mr
- or
- ne
- pa
- sa
- ta
- te
- ur
license: cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- token-classification
task_ids: []
paperswithcode_id: plod-filtered
pretty_name: 'PLOD: An Abbreviation Detection Dataset'
tags:
- abbreviation-detection
---
<p align="center"><img src="https://huggingface.co/datasets/cfilt/HiNER-collapsed/raw/main/cfilt-dark-vec.png" alt="Computation for Indian Language Technology Logo" width="150" height="150"/></p>
# IWN Wordlists
[](https://creativecommons.org/licenses/by-nc-sa/4.0/) [](https://twitter.com/cfiltnlp) [](https://twitter.com/PeopleCentredAI)
We provide the unique word list from the [IndoWordnet (IWN)](https://www.cfilt.iitb.ac.in/indowordnet/) knowledge base.
## Usage
```python
from datasets import load_dataset
language = "hindi" // supported languages: assamese, bengali, bodo, gujarati, hindi, kannada, kashmiri, konkani, malayalam, manipuri, marathi, meitei, nepali, oriya, punjabi, sanskrit, tamil, telugu, urdu.
words = load_dataset("cfilt/iwn_wordlists", language)
word_list = words["train"]["word"]
```
## Citation
```latex
@inproceedings{bhattacharyya2010indowordnet,
title={IndoWordNet},
author={Bhattacharyya, Pushpak},
booktitle={Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)},
year={2010}
}
``` | [
-0.3155289888381958,
-0.2907606065273285,
-0.10959316790103912,
0.5829112529754639,
-0.34325915575027466,
0.2512049674987793,
-0.4366039037704468,
-0.5050702095031738,
0.5453768968582153,
0.21481020748615265,
-0.7234097719192505,
-0.4260781705379486,
-0.7324100732803345,
0.1716450899839401... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
huggan/CelebA-HQ | huggan | 2022-04-12T14:10:49Z | 126 | 9 | null | [
"arxiv:1710.10196",
"region:us"
] | 2022-04-12T14:10:49Z | 2022-03-24T09:12:05.000Z | 2022-03-24T09:12:05 | # Citation
```
@article{DBLP:journals/corr/abs-1710-10196,
author = {Tero Karras and
Timo Aila and
Samuli Laine and
Jaakko Lehtinen},
title = {Progressive Growing of GANs for Improved Quality, Stability, and Variation},
journal = {CoRR},
volume = {abs/1710.10196},
year = {2017},
url = {http://arxiv.org/abs/1710.10196},
eprinttype = {arXiv},
eprint = {1710.10196},
timestamp = {Mon, 13 Aug 2018 16:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1710-10196.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | [
-0.5237878561019897,
-0.6698497533798218,
0.09401964396238327,
0.43218183517456055,
-0.06967472285032272,
-0.013867138884961605,
-0.08197170495986938,
-0.20749050378799438,
0.4845099151134491,
0.033346306532621384,
-0.40580084919929504,
-0.5403660535812378,
-0.27147480845451355,
0.35793811... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sled-umich/TRIP | sled-umich | 2022-10-14T19:17:29Z | 126 | 0 | null | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"region:us"
] | 2022-10-14T19:17:29Z | 2022-10-12T18:23:13.000Z | 2022-10-12T18:23:13 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- crowdsourced
license: []
multilinguality:
- monolingual
pretty_name: 'TRIP: Tiered Reasoning for Intuitive Physics'
size_categories:
- 1K<n<10K
source_datasets:
- original
tags: []
task_categories:
- text-classification
task_ids:
- natural-language-inference
---
# [TRIP - Tiered Reasoning for Intuitive Physics](https://aclanthology.org/2021.findings-emnlp.422/)
Official dataset for [Tiered Reasoning for Intuitive Physics: Toward Verifiable Commonsense Language Understanding](https://aclanthology.org/2021.findings-emnlp.422/). Shane Storks, Qiaozi Gao, Yichi Zhang, Joyce Chai. EMNLP Findings, 2021.
For our official model and experiment code, please check [GitHub](https://github.com/sled-group/Verifiable-Coherent-NLU).
## Overview

We introduce Tiered Reasoning for Intuitive Physics (TRIP), a novel commonsense reasoning dataset with dense annotations that enable multi-tiered evaluation of machines’ reasoning process.
It includes dense annotations for each story capturing multiple tiers of reasoning beyond the end task. From these annotations, we propose a tiered evaluation, where given a pair of highly similar stories (differing only by one sentence which makes one of the stories implausible), systems must jointly identify (1) the plausible story, (2) a pair of conflicting sentences in the implausible story, and (3) the underlying physical states in those sentences causing the conflict. The goal of TRIP is to enable a systematic evaluation of machine coherence toward the end task prediction of plausibility. In particular, we evaluate whether a high-level plausibility prediction can be verified based on lower-level understanding, for example, physical state changes that would support the prediction.
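As an illustrative sketch — not the official scorer from the GitHub repository linked above, and with hypothetical field names — the tiered evaluation can be thought of as three nested checks, where a lower tier only counts as correct if every tier above it is also correct:

```python
def tiered_score(pred: dict, gold: dict) -> dict:
    # Tier 1: pick the plausible story.
    story_ok = pred["plausible_story"] == gold["plausible_story"]
    # Tier 2: identify the conflicting sentence pair (order-insensitive).
    conflict_ok = story_ok and set(pred["conflict_pair"]) == set(gold["conflict_pair"])
    # Tier 3: identify the underlying physical states causing the conflict.
    states_ok = conflict_ok and pred["physical_states"] == gold["physical_states"]
    return {"accuracy": story_ok, "consistency": conflict_ok, "verifiability": states_ok}

gold = {"plausible_story": "A", "conflict_pair": (2, 4), "physical_states": {"wet": True}}
pred = {"plausible_story": "A", "conflict_pair": (4, 2), "physical_states": {"wet": False}}
print(tiered_score(pred, gold))
# {'accuracy': True, 'consistency': True, 'verifiability': False}
```

Here the model picks the right story and conflict pair, but its predicted physical state is wrong, so the prediction is consistent yet not verifiable.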
## Download
```python
from datasets import load_dataset
dataset = load_dataset("sled-umich/TRIP")
```
* [HuggingFace-Dataset](https://huggingface.co/datasets/sled-umich/TRIP)
* [GitHub](https://github.com/sled-group/Verifiable-Coherent-NLU)
## Cite
```bibtex
@misc{storks2021tiered,
title={Tiered Reasoning for Intuitive Physics: Toward Verifiable Commonsense Language Understanding},
author={Shane Storks and Qiaozi Gao and Yichi Zhang and Joyce Chai},
year={2021},
booktitle={Findings of the Association for Computational Linguistics: EMNLP 2021},
location={Punta Cana, Dominican Republic},
publisher={Association for Computational Linguistics},
}
```
| [
-0.24794737994670868,
-0.8537356853485107,
0.6494402885437012,
0.2396976351737976,
0.06996266543865204,
0.11997929960489273,
-0.14309397339820862,
-0.467562735080719,
0.04857861250638962,
0.2958239018917084,
-0.5199008584022522,
-0.4124812185764313,
-0.24833428859710693,
0.0723817721009254... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ashraq/hotel-reviews | ashraq | 2022-10-27T17:24:29Z | 126 | 1 | null | [
"region:us"
] | 2022-10-27T17:24:29Z | 2022-10-27T17:22:07.000Z | 2022-10-27T17:22:07 | ---
dataset_info:
features:
- name: review_date
dtype: string
- name: hotel_name
dtype: string
- name: review
dtype: string
splits:
- name: train
num_bytes: 15043294
num_examples: 93757
download_size: 6100544
dataset_size: 15043294
---
# Dataset Card for "hotel-reviews"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Data was obtained from [here](https://www.kaggle.com/datasets/jiashenliu/515k-hotel-reviews-data-in-europe) | [
-0.6471289396286011,
-0.3929433822631836,
0.3917583227157593,
0.07326685637235641,
-0.21981072425842285,
-0.20021672546863556,
0.01698351837694645,
-0.38887882232666016,
0.8127062320709229,
0.6276605129241943,
-0.8027364611625671,
-0.8631864190101624,
-0.2546440660953522,
0.114310659468173... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vegaviazhang/Med_QQpairs | vegaviazhang | 2023-06-16T03:35:25Z | 126 | 3 | null | [
"license:cc0-1.0",
"region:us"
] | 2023-06-16T03:35:25Z | 2023-06-16T03:33:41.000Z | 2023-06-16T03:33:41 | ---
license: cc0-1.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
distil-whisper/ami-ihm-timestamped | distil-whisper | 2023-09-25T10:30:13Z | 126 | 0 | null | [
"task_categories:automatic-speech-recognition",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2023-09-25T10:30:13Z | 2023-09-22T09:05:01.000Z | 2023-09-22T09:05:01 | ---
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
language:
- en
pretty_name: AMI IHM
---
# Distil Whisper: AMI IHM With Timestamps
This is a variant of the [AMI IHM](https://huggingface.co/datasets/edinburghcstr/ami) dataset, augmented to return the pseudo-labelled Whisper
transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/edinburghcstr/ami).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/ami-ihm", "ihm")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/ami-ihm", "ihm", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under cc-by-4.0.
| [
-0.21574698388576508,
-0.6255884170532227,
0.21369107067584991,
0.48257923126220703,
-0.23678342998027802,
0.10571800172328949,
-0.02555043064057827,
-0.30645424127578735,
0.364065557718277,
0.3682234287261963,
-0.8993737697601318,
-0.44030338525772095,
-0.6693173050880432,
0.1046156957745... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
distil-whisper/voxpopuli-timestamped | distil-whisper | 2023-09-25T10:30:13Z | 126 | 0 | null | [
"task_categories:automatic-speech-recognition",
"language:en",
"license:cc0-1.0",
"region:us"
] | 2023-09-25T10:30:13Z | 2023-09-22T09:05:12.000Z | 2023-09-22T09:05:12 | ---
license: cc0-1.0
task_categories:
- automatic-speech-recognition
language:
- en
pretty_name: VoxPopuli
---
# Distil Whisper: VoxPopuli With Timestamps
This is a variant of the [VoxPopuli](https://huggingface.co/datasets/facebook/voxpopuli) dataset, augmented to return the pseudo-labelled Whisper
transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/facebook/voxpopuli).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/voxpopuli", "en")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/voxpopuli", "en", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under cc0-1.0.
| [
-0.13802269101142883,
-0.8077684640884399,
0.18756191432476044,
0.49629223346710205,
-0.17465677857398987,
0.10433964431285858,
-0.1284848153591156,
-0.2037370204925537,
0.419419527053833,
0.30911141633987427,
-0.8476447463035583,
-0.46133002638816833,
-0.5478321313858032,
0.00685062538832... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
manu/test_topicbasednli | manu | 2023-11-08T15:22:48Z | 126 | 0 | null | [
"region:us"
] | 2023-11-08T15:22:48Z | 2023-10-07T18:11:10.000Z | 2023-10-07T18:11:10 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: text
dtype: string
- name: topic
dtype: string
- name: source
dtype: string
- name: polarity
dtype: string
- name: place_name
dtype: string
- name: industry
dtype: string
- name: rating
dtype: int64
- name: id
dtype: int64
splits:
- name: test
num_bytes: 178719
num_examples: 400
- name: valid
num_bytes: 49899
num_examples: 100
download_size: 124973
dataset_size: 228618
---
# Dataset Card for "test_topicbasednli"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6760928630828857,
-0.4354937672615051,
-0.012572228908538818,
0.2203010618686676,
-0.11393050849437714,
-0.04215800017118454,
0.1556849181652069,
0.12236179411411285,
0.95612633228302,
0.3670780062675476,
-1.002218246459961,
-0.6533098220825195,
-0.3643255829811096,
-0.3323085308074951,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
datastax/philosopher-quotes | datastax | 2023-10-11T07:55:38Z | 126 | 0 | null | [
"task_categories:conversational",
"size_categories:n<1K",
"language:en",
"license:cc",
"code",
"region:us"
] | 2023-10-11T07:55:38Z | 2023-10-10T12:02:19.000Z | 2023-10-10T12:02:19 | ---
license: cc
task_categories:
- conversational
language:
- en
tags:
- code
pretty_name: Philosophers Quotes
size_categories:
- n<1K
---
450 quotes by 9 philosophers (50 quotes each), labeled with the author and with a variable number of topic tags.
The quotes originally come from https://www.kaggle.com/datasets/mertbozkurt5/quotes-by-philosophers (CC BY-NC-SA 4.0).
The text of each quote has been cleaned of soft-hyphens (`\xad`) and other weird characters.
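A minimal sketch of that cleanup step (the exact character set the dataset authors removed is not documented here, so the list below is an assumption):

```python
def clean_quote(text: str) -> str:
    # Remove soft hyphens and a few other common invisible characters,
    # then normalize whitespace.
    for ch in ("\xad", "\u200b", "\ufeff"):
        text = text.replace(ch, "")
    return " ".join(text.split())

print(clean_quote("To be\xadcome what we are"))  # To become what we are
```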
The topic labeling has been done with a default HuggingFace zero-shot classifier pipeline with multi_labels. | [
-0.8205190896987915,
-0.5204902291297913,
0.5796033143997192,
-0.12080904841423035,
-0.4733724594116211,
-0.023256929591298103,
-0.067354217171669,
-0.23556087911128998,
0.3217772841453552,
0.9310240149497986,
-0.8329257965087891,
-0.16363121569156647,
-0.5657020807266235,
0.16457767784595... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
haryoaw/COPAL | haryoaw | 2023-11-11T12:37:25Z | 126 | 3 | null | [
"task_categories:multiple-choice",
"size_categories:n<1K",
"language:id",
"license:cc-by-sa-4.0",
"arxiv:2311.01012",
"region:us"
] | 2023-11-11T12:37:25Z | 2023-10-28T14:35:55.000Z | 2023-10-28T14:35:55 | ---
license: cc-by-sa-4.0
task_categories:
- multiple-choice
language:
- id
size_categories:
- n<1K
configs:
- config_name: id
data_files:
- split: test
path: test_copal.csv
- split: test_colloquial
path: test_copal_colloquial.csv
---
## Paper
arXiv URL: https://arxiv.org/abs/2311.01012
We present publicly available COPAL-ID, a novel Indonesian language common sense reasoning dataset. Unlike the previous Indonesian COPA dataset (XCOPA-ID), COPAL-ID incorporates Indonesian local and cultural nuances, and therefore, provides a more natural portrayal of day-to-day causal reasoning within the Indonesian cultural sphere. Professionally written by natives from scratch, COPAL-ID is more fluent and free from awkward phrases, unlike the translated XCOPA-ID. In addition, we present COPAL-ID in both standard Indonesian and in Jakartan Indonesian--a dialect commonly used in daily conversation. COPAL-ID poses a greater challenge for existing open-sourced and closed state-of-the-art multilingual language models, yet is trivially easy for humans. Our findings suggest that even the current best open-source, multilingual model struggles to perform well, achieving 65.47% accuracy on COPAL-ID, significantly lower than on the culturally-devoid XCOPA-ID (79.40%). Despite GPT-4's impressive score, it suffers the same performance degradation compared to its XCOPA-ID score, and it still falls short of human performance. This shows that these language models are still way behind in comprehending the local nuances of Indonesian.
## How to Use
```py
from datasets import load_dataset
copal_id_dataset = load_dataset('haryoaw/COPAL', 'id', split='test')
copal_id_colloquial_dataset = load_dataset('haryoaw/COPAL', 'id', split='test_colloquial')
```
## Cite Our Work
```
@article{wibowo2023copal,
title={COPAL-ID: Indonesian Language Reasoning with Local Culture and Nuances},
author={Wibowo, Haryo Akbarianto and Fuadi, Erland Hilman and Nityasya, Made Nindyatama and Prasojo, Radityo Eko and Aji, Alham Fikri},
journal={arXiv preprint arXiv:2311.01012},
year={2023}
}
``` | [
-0.2619943618774414,
-0.7213615775108337,
0.2579248547554016,
0.3585399091243744,
-0.3281760811805725,
-0.12139555811882019,
-0.4256935715675354,
-0.7748263478279114,
-0.03165517374873161,
0.5199872851371765,
-0.16142484545707703,
-0.7112376093864441,
-0.20324338972568512,
0.45290687680244... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
celsowm/cnn_news_ptbr | celsowm | 2023-11-28T02:51:08Z | 126 | 0 | null | [
"region:us"
] | 2023-11-28T02:51:08Z | 2023-11-04T13:04:04.000Z | 2023-11-04T13:04:04 | ---
dataset_info:
features:
- name: titulo
dtype: string
- name: data_hora
dtype: string
- name: resumo
dtype: string
- name: categoria
dtype: string
- name: texto
dtype: string
- name: link
dtype: string
splits:
- name: train
num_bytes: 7668368
num_examples: 3038
download_size: 4313781
dataset_size: 7668368
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "cnn_news_ptbr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5921134948730469,
-0.4510359466075897,
0.07689286023378372,
0.4828532338142395,
-0.6195679903030396,
0.13111315667629242,
0.1630818247795105,
-0.012958439998328686,
0.5110133290290833,
0.3398636281490326,
-0.5156176090240479,
-0.9216691255569458,
-0.8245158195495605,
-0.3621955811977386... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mpachauri/TrainingDataset | mpachauri | 2023-11-06T15:21:41Z | 126 | 0 | null | [
"region:us"
] | 2023-11-06T15:21:41Z | 2023-11-06T14:02:06.000Z | 2023-11-06T14:02:06 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Lala8383/hw3 | Lala8383 | 2023-11-21T03:53:53Z | 126 | 0 | null | [
"region:us"
] | 2023-11-21T03:53:53Z | 2023-11-21T03:35:26.000Z | 2023-11-21T03:35:26 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lewtun/my-awesome-dataset | lewtun | 2022-07-03T05:16:07Z | 125 | 0 | null | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"region:us"
] | 2022-07-03T05:16:07Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- conditional-text-generation
task_ids:
- summarization
---
# Dataset Card for Demo
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is a demo dataset with two files `train.csv` and `test.csv`.
Load it by:
```python
from datasets import load_dataset
data_files = {"train": "train.csv", "test": "test.csv"}
demo = load_dataset("stevhliu/demo", data_files=data_files)
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | [
-0.3943271040916443,
-0.5129885077476501,
0.054255615919828415,
0.2910892963409424,
-0.17377373576164246,
0.18770764768123627,
-0.42581048607826233,
-0.3550795614719391,
0.5684940814971924,
0.544796347618103,
-0.896458625793457,
-1.1392998695373535,
-0.6151477694511414,
0.05347051844000816... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
agemagician/NetSurfP-SS3 | agemagician | 2022-04-18T03:43:55Z | 125 | 1 | null | [
"region:us"
] | 2022-04-18T03:43:55Z | 2022-04-18T03:43:51.000Z | 2022-04-18T03:43:51 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mbazaNLP/kinyarwanda-tts-dataset | mbazaNLP | 2023-06-27T08:09:28Z | 125 | 1 | null | [
"language_creators:Digital Umuganda",
"size_categories:3K<n<4K",
"size_categories:~6hours",
"language:rw",
"license:cc-by-4.0",
"region:us"
] | 2023-06-27T08:09:28Z | 2022-05-27T08:20:36.000Z | 2022-05-27T08:20:36 | ---
language:
- rw
language_creators:
- "Digital Umuganda"
license:
- cc-by-4.0
size_categories:
- 3K<n<4K
- ~6hours
---
# Kinyarwanda TTS dataset
The dataset consists of 3992 clips of a Kinyarwanda TTS corpus recorded in a studio by a voice actress; it was collected as part of the mbaza project.
## Data structure
```
Audio: 3992 Single voice studio recordings by a voice actress
Text: CSV with audio name and corresponding written text
```
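As a sketch of how such a metadata CSV could be paired with the clips (the column names `audio` and `text` below are assumptions for illustration and are not documented by this dataset):

```python
import csv
import io

# Hypothetical two-column metadata file: audio file name + transcript.
# The actual column names used in this dataset are not stated in the card.
sample_csv = "audio,text\nclip_0001.wav,Muraho neza\nclip_0002.wav,Murakoze cyane\n"

rows = list(csv.DictReader(io.StringIO(sample_csv)))
# Map each clip to its transcript for lookup during training.
pairs = {row["audio"]: row["text"] for row in rows}
print(pairs["clip_0001.wav"])  # Muraho neza
```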
## Language
The dataset is in the Kinyarwanda language.
## Dataset Creation
- The collected text had to include Kinyarwanda syllables, each formed by a combination of a consonant or a group of consonants (e.g. Nyw) and a vowel.
- The text was reviewed by a linguist to ensure it fits Kinyarwanda standards.
- The voice was recorded in a studio, albeit in a semi-professional setting (i.e. some of the audio contains reverb).
| [
-0.36905744671821594,
-0.44126948714256287,
-0.22268420457839966,
0.014686780981719494,
-0.08660632371902466,
0.215382382273674,
-0.004563234280794859,
-0.17931389808654785,
0.6468899250030518,
0.7328138947486877,
-0.6615457534790039,
-0.6116423010826111,
-0.595064103603363,
0.233973234891... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cakiki/rosetta-code | cakiki | 2023-09-24T10:17:35Z | 125 | 14 | null | [
"language:code",
"license:gfdl",
"region:us"
] | 2023-09-24T10:17:35Z | 2022-06-28T20:41:33.000Z | 2022-06-28T20:41:33 | ---
license: gfdl
language: code
---
# Dataset Card for the Rosetta Code Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
> Rosetta Code is a programming chrestomathy site. The idea is to present solutions to the same task in as many different languages as possible, to demonstrate how languages are similar and different, and to aid a person with a grounding in one approach to a problem in learning another. Rosetta Code currently has 1,203 tasks, 389 draft tasks, and is aware of 883 languages, though we do not (and cannot) have solutions to every task in every language.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
```
['ALGOL 68', 'Arturo', 'AWK', 'F#', 'Factor', 'Go', 'J', 'jq', 'Julia', 'Lua', 'Mathematica/Wolfram Language',
'Perl', 'Phix', 'Picat', 'Python', 'Quackery', 'Raku', 'Ring', 'Sidef', 'Vlang', 'Wren', 'XPL0', '11l',
'68000 Assembly', '8th', 'AArch64 Assembly', 'ABAP', 'ACL2', 'Action!', 'ActionScript', 'Ada', 'Aime', 'ALGOL W',
'Amazing Hopper', 'AntLang', 'Apex', 'APL', 'AppleScript', 'ARM Assembly', 'ATS', 'AutoHotkey', 'AutoIt', 'Avail',
'Babel', 'bash', 'BASIC', 'BASIC256', 'BQN', 'Bracmat', 'Burlesque', 'C', 'C#', 'C++', 'Ceylon', 'Clojure', 'COBOL',
'CoffeeScript', 'Common Lisp', 'Component Pascal', 'Crystal', 'D', 'Delphi', 'Dyalect', 'E', 'EasyLang', 'EchoLisp',
'ECL', 'Efene', 'EGL', 'Ela', 'Elena', 'Elixir', 'Elm', 'Emacs Lisp', 'Erlang', 'ERRE', 'Euphoria', 'Fantom', 'FBSL',
'Forth', 'Fortran', 'Free Pascal', 'FreeBASIC', 'Frink', 'FunL', 'Futhark', 'FutureBasic', 'Gambas', 'GAP', 'Genie',
'GLSL', 'Gosu', 'Groovy', 'Haskell', 'HicEst', 'Hy', 'i', 'Icon and Unicon', 'IDL', 'Idris', 'Inform 7', 'Ioke', 'Java',
'JavaScript', 'K', 'Klingphix', 'Klong', 'Kotlin', 'LabVIEW', 'Lambdatalk', 'Lang5', 'langur', 'Lasso', 'LFE', 'Liberty BASIC',
'LIL', 'Limbo', 'Lingo', 'Little', 'Logo', 'M2000 Interpreter', 'Maple', 'Mathcad', 'Mathematica / Wolfram Language',
'MATLAB / Octave', 'Maxima', 'Mercury', 'min', 'MiniScript', 'Nanoquery', 'Neko', 'Nemerle', 'NetRexx', 'NewLISP', 'Nial',
'Nim', 'Oberon-2', 'Objeck', 'Objective-C', 'OCaml', 'Oforth', 'Onyx', 'ooRexx', 'Order', 'OxygenBasic', 'Oz', 'PARI/GP',
'Pascal', 'Phixmonti', 'PHP', 'PicoLisp', 'Pike', 'PL/I', 'Pony', 'PostScript', 'PowerShell', 'Processing', 'Prolog',
'PureBasic', 'Q', 'QBasic', 'QB64', 'R', 'Racket', 'RapidQ', 'REBOL', 'Red', 'ReScript', 'Retro', 'REXX', 'RLaB', 'Ruby',
'Rust', 'S-lang', 'SASL', 'Scala', 'Scheme', 'Seed7', 'SenseTalk', 'SETL', 'Simula', '360 Assembly', '6502 Assembly', 'Slate',
'Smalltalk', 'Ol', 'SNOBOL4', 'Standard ML', 'Stata', 'Swift', 'Tailspin', 'Tcl', 'TI-89 BASIC', 'Trith', 'UNIX Shell',
'Ursa', 'Vala', 'VBA', 'VBScript', 'Visual Basic .NET', 'Wart', 'BaCon', 'Bash', 'Yabasic', 'Yacas', 'Batch File', 'Yorick',
'Z80 Assembly', 'BBC BASIC', 'Brat', 'zkl', 'zonnon', 'Zsh', 'ZX Spectrum Basic', 'Clipper/XBase++', 'ColdFusion', 'Dart',
'DataWeave', 'Dragon', 'FurryScript', 'Fōrmulæ', 'Harbour', 'hexiscript', 'Hoon', 'Janet', '0815', 'Jsish', 'Latitude', 'LiveCode',
'Aikido', 'AmigaE', 'MiniZinc', 'Asymptote', 'NGS', 'bc', 'Befunge', 'Plorth', 'Potion', 'Chef', 'Clipper', 'Relation', 'Robotic',
'dc', 'DCL', 'DWScript', 'Shen', 'SPL', 'SQL', 'Eiffel', 'Symsyn', 'Emojicode', 'TI-83 BASIC', 'Transd', 'Excel', 'Visual Basic',
'FALSE', 'WDTE', 'Fermat', 'XLISP', 'Zig', 'friendly interactive shell', 'Zoea', 'Zoea Visual', 'GEORGE', 'Haxe', 'HolyC', 'LSE64',
'M4', 'MAXScript', 'Metafont', 'МК-61/52', 'ML/I', 'Modula-2', 'Modula-3', 'MUMPS', 'NSIS', 'Openscad', 'Panda', 'PHL', 'Piet',
'Plain English', 'Pop11', 'ProDOS', '8051 Assembly', 'Python 3.x Long Form', 'Raven', 'ALGOL 60', 'Run BASIC', 'Sass/SCSS', 'App Inventor',
'smart BASIC', 'SNUSP', 'Arendelle', 'SSEM', 'Argile', 'Toka', 'TUSCRIPT', '4DOS Batch', '8080 Assembly', 'Vedit macro language',
'8086 Assembly', 'Axe', 'Elisa', 'Verilog', 'Vim Script', 'x86 Assembly', 'Euler Math Toolbox', 'Acurity Architect', 'XSLT', 'BML',
'Agena', 'Boo', 'Brainf***', 'LLVM', 'FOCAL', 'Frege', 'ALGOL-M', 'ChucK', 'Arbre', 'Clean', 'Hare', 'MATLAB', 'Astro', 'Applesoft BASIC',
'OOC', 'Bc', 'Computer/zero Assembly', 'SAS', 'Axiom', 'B', 'Dao', 'Caché ObjectScript', 'CLU', 'Scilab', 'DBL', 'Commodore BASIC', 'Diego',
'Dc', 'BCPL', 'Alore', 'Blade', 'Déjà Vu', 'Octave', 'Cowgol', 'BlitzMax', 'Falcon', 'BlooP', 'SequenceL', 'Sinclair ZX81 BASIC', 'GW-BASIC',
'Lobster', 'C1R', 'Explore', 'Clarion', 'Locomotive Basic', 'GUISS', 'Clio', 'TXR', 'Ursala', 'CLIPS', 'Microsoft Small Basic', 'Golfscript',
'Beads', 'Coco', 'Little Man Computer', 'Chapel', 'Comal', 'Curry', 'GML', 'NewLisp', 'Coq', 'Gastona', 'uBasic/4tH', 'Pyret', 'Dhall',
'Plain TeX', 'Halon', 'Wortel', 'FormulaOne', 'Dafny', 'Ksh', 'Eero', 'Fan', 'Draco', 'DUP', 'Io', 'Metapost', 'Logtalk', 'Dylan', 'TI-83_BASIC',
'Sather', 'Rascal', 'SIMPOL', 'IS-BASIC', 'KonsolScript', 'Pari/Gp', 'Genyris', 'EDSAC order code', 'Egel', 'Joy', 'lang5', 'XProc', 'XQuery',
'POV-Ray', 'Kitten', 'Lisaac', 'LOLCODE', 'SVG', 'MANOOL', 'LSL', 'Moonscript', 'Fhidwfe', 'Inspired by Rascal', 'Fish', 'MIPS Assembly',
'Monte', 'FUZE BASIC', 'NS-HUBASIC', 'Qi', 'GDScript', 'Glee', 'SuperCollider', 'Verbexx', 'Huginn', 'I', 'Informix 4GL', 'Isabelle', 'KQL',
'lambdatalk', 'RPG', 'Lhogho', 'Lily', 'xTalk', 'Scratch', 'Self', 'MAD', 'RATFOR', 'OpenEdge/Progress', 'Xtend', 'Suneido', 'Mirah',
'mIRC Scripting Language', 'ContextFree', 'Tern', 'MMIX', 'AmigaBASIC', 'AurelBasic', 'TorqueScript', 'MontiLang', 'MOO', 'MoonScript',
'Unicon', 'fermat', 'q', 'Myrddin', 'உயிர்/Uyir', 'MySQL', 'newLISP', 'VHDL', 'Oberon', 'Wee Basic', 'OpenEdge ABL/Progress 4GL', 'X86 Assembly',
'XBS', 'KAP', 'Perl5i', 'Peloton', 'PL/M', 'PL/SQL', 'Pointless', 'Polyglot:PL/I and PL/M', 'ToffeeScript', 'TMG', 'TPP', 'Pure', 'Pure Data',
'Xidel', 'S-BASIC', 'Salmon', 'SheerPower 4GL', 'Sparkling', 'Spin', 'SQL PL', 'Transact-SQL', 'True BASIC', 'TSE SAL', 'Tiny BASIC', 'TypeScript',
'Uniface', 'Unison', 'UTFool', 'VAX Assembly', 'VTL-2', 'Wrapl', 'XBasic', 'Xojo', 'XSLT 1.0', 'XSLT 2.0', 'MACRO-10', 'ANSI Standard BASIC',
'UnixPipes', 'REALbasic', 'Golo', 'DM', 'X86-64 Assembly', 'GlovePIE', 'PowerBASIC', 'LotusScript', 'TIScript', 'Kite', 'V', 'Powershell', 'Vorpal',
'Never', 'Set lang', '80386 Assembly', 'Furor', 'Input conversion with Error Handling', 'Guile', 'ASIC', 'Autolisp', 'Agda', 'Swift Playground',
'Nascom BASIC', 'NetLogo', 'CFEngine', 'OASYS Assembler', 'Fennel', 'Object Pascal', 'Shale', 'GFA Basic', 'LDPL', 'Ezhil', 'SMEQL', 'tr', 'WinBatch',
'XPath 2.0', 'Quite BASIC', 'Gema', '6800 Assembly', 'Applescript', 'beeswax', 'gnuplot', 'ECMAScript', 'Snobol4', 'Blast', 'C/C++', 'Whitespace',
'Blue', 'C / C++', 'Apache Derby', 'Lychen', 'Oracle', 'Alternative version', 'PHP+SQLite', 'PILOT', 'PostgreSQL', 'PowerShell+SQLite', 'PureBasic+SQLite',
'Python+SQLite', 'SQLite', 'Tcl+SQLite', 'Transact-SQL (MSSQL)', 'Visual FoxPro', 'SmileBASIC', 'Datalog', 'SystemVerilog', 'Smart BASIC', 'Snobol', 'Terraform',
'ML', 'SQL/PostgreSQL', '4D', 'ArnoldC', 'ANSI BASIC', 'Delphi/Pascal', 'ooREXX', 'Dylan.NET', 'CMake', 'Lucid', 'XProfan', 'sed', 'Gnuplot', 'RPN (HP-15c)',
'Sed', 'JudoScript', 'ScriptBasic', 'Unix shell', 'Niue', 'Powerbuilder', 'C Shell', 'Zoomscript', 'MelonBasic', 'ScratchScript', 'SimpleCode', 'OASYS',
'HTML', 'tbas', 'LaTeX', 'Lilypond', 'MBS', 'B4X', 'Progress', 'SPARK / Ada', 'Arc', 'Icon', 'AutoHotkey_L', 'LSE', 'N/t/roff', 'Fexl', 'Ra', 'Koka',
'Maclisp', 'Mond', 'Nix', 'ZED', 'Inform 6', 'Visual Objects', 'Cind', 'm4', 'g-fu', 'pascal', 'Jinja', 'Mathprog', 'Rhope', 'Delphi and Pascal', 'Epoxy',
'SPARK', 'B4J', 'DIBOL-11', 'JavaFX Script', 'Pixilang', 'BASH (feat. sed & tr)', 'zig', 'Web 68', 'Shiny', 'Egison', 'OS X sha256sum', 'AsciiDots',
'FileMaker', 'Unlambda', 'eC', 'GLBasic', 'JOVIAL', 'haskell', 'Atari BASIC', 'ANTLR', 'Cubescript', 'OoRexx', 'WebAssembly', 'Woma', 'Intercal', 'Malbolge',
'LiveScript', 'Fancy', 'Detailed Description of Programming Task', 'Lean', 'GeneXus', 'CafeOBJ', 'TechBASIC', 'blz', 'MIRC Scripting Language', 'Oxygene',
'zsh', 'Make', 'Whenever', 'Sage', 'L++', 'Tosh', 'LC3 Assembly', 'SETL4', 'Pari/GP', 'OxygenBasic x86 Assembler', 'Pharo', 'Binary Lambda Calculus', 'Bob',
'bootBASIC', 'Turing', 'Ultimate++', 'Gabuzomeu', 'HQ9+', 'INTERCAL', 'Lisp', 'NASM', 'SPWN', 'Turbo Pascal', 'Nickle', 'SPAD', 'Mozart/Oz', 'Batch file',
'SAC', 'C and C++', 'vbscript', 'OPL', 'Wollok', 'Pascal / Delphi / Free Pascal', 'GNU make', 'Recursive', 'C3', 'Picolisp', 'Note 1', 'Note 2', 'Visual Prolog',
'ivy', 'k', 'clojure', 'Unix Shell', 'Basic09', 'S-Basic', 'FreePascal', 'Wolframalpha', 'c_sharp', 'LiveCode Builder', 'Heron', 'SPSS', 'LibreOffice Basic',
'PDP-11 Assembly', 'Solution with recursion', 'Lua/Torch', 'tsql', 'Transact SQL', 'X++', 'Xanadu', 'GDL', 'C_sharp', 'TutorialD', 'Glagol', 'Basic', 'Brace',
'Cixl', 'ELLA', 'Lox', 'Node.js', 'Generic', 'Hope', 'Snap!', 'TSQL', 'MathCortex', 'Mathmap', 'TI-83 BASIC, TI-89 BASIC', 'ZPL', 'LuaTeX', 'AmbientTalk',
'Alternate version to handle 64 and 128 bit integers.', 'Crack', 'Corescript', 'Fortress', 'GB BASIC', 'IWBASIC', 'RPL', 'DMS', 'dodo0', 'MIXAL', 'Occam',
'Morfa', 'Snabel', 'ObjectIcon', 'Panoramic', 'PeopleCode', 'Monicelli', 'gecho', 'Hack', 'JSON', 'Swym', 'ReasonML', 'make', 'TOML', 'WEB', 'SkookumScript',
'Batch', 'TransFORTH', 'Assembly', 'Iterative', 'LC-3', 'Quick Basic/QBASIC/PDS 7.1/VB-DOS', 'Turbo-Basic XL', 'GNU APL', 'OOCalc', 'QUACKASM', 'VB-DOS',
'Typescript', 'x86-64 Assembly', 'FORTRAN', 'Furryscript', 'Gridscript', 'Necromantus', 'HyperTalk', 'Biferno', 'AspectJ', 'SuperTalk', 'Rockstar', 'NMAKE.EXE',
'Opa', 'Algae', 'Anyways', 'Apricot', 'AutoLISP', 'Battlestar', 'Bird', 'Luck', 'Brlcad', 'C++/CLI', 'C2', 'Casio BASIC', 'Cat', 'Cduce', 'Clay', 'Cobra',
'Comefrom0x10', 'Creative Basic', 'Integer BASIC', 'DDNC', 'DeviousYarn', 'DIV Games Studio', 'Wisp', 'AMPL', 'Pare', 'PepsiScript', 'Installing Processing',
'Writing your first program', 'batari Basic', 'Jack', 'elastiC', 'TI-83 Hex Assembly', 'Extended BrainF***', '1C', 'PASM', 'Pict', 'ferite', 'Bori', 'RASEL',
'Echolisp', 'XPath', 'MLite', 'HPPPL', 'Gentee', 'JSE', 'Just Basic', 'Global Script', 'Nyquist', 'HLA', 'Teradata Stored Procedure', 'HTML5', 'Portugol',
'UBASIC', 'NOWUT', 'Inko', 'Jacquard Loom', 'JCL', 'Supernova', 'Small Basic', 'Kabap', 'Kaya', 'Kdf9 Usercode', 'Keg', 'KSI', 'Gecho', 'Gri', 'VBA Excel',
'Luna', 'MACRO-11', 'MINIL', 'Maude', 'MDL', 'Mosaic', 'Purity', 'MUF', 'MyDef', 'MyrtleScript', 'Mythryl', 'Neat', 'ThinBASIC', 'Nit', 'NLP++', 'Odin', 'OpenLisp',
'PDP-1 Assembly', 'Peylang', 'Pikachu', 'NESL', 'PIR', 'Plan', 'Programming Language', 'PROMAL', 'PSQL', 'Quill', 'xEec', 'RED', 'Risc-V', 'RTL/2', 'Sing', 'Sisal',
'SoneKing Assembly', 'SPARC Assembly', 'Swahili', 'Teco', 'Terra', 'TestML', 'Viua VM assembly', 'Whiley', 'Wolfram Language', 'X10', 'Quack', 'K4', 'XL', 'MyHDL',
'JAMES II/Rule-based Cellular Automata', 'APEX', 'QuickBASIC 4.5', 'BrightScript (for Roku)', 'Coconut', 'CSS', 'MapBasic', 'Gleam', 'AdvPL', 'Iptscrae', 'Kamailio Script',
'KL1', 'MEL', 'NATURAL', 'NewtonScript', 'PDP-8 Assembly', 'FRISC Assembly', 'Amstrad CPC Locomotive BASIC', 'Ruby with RSpec', 'php', 'Small', 'Lush', 'Squirrel',
'PL/pgSQL', 'XMIDAS', 'Rebol', 'embedded C for AVR MCU', 'FPr', 'Softbridge BASIC', 'StreamIt', 'jsish', 'JScript.NET', 'MS-DOS', 'Beeswax', 'eSQL', 'QL SuperBASIC',
'Rapira', 'Jq', 'scheme', 'oberon-2', '{{header|Vlang}', 'XUL', 'Soar', 'Befunge 93', 'Bash Shell', 'JacaScript', 'Xfractint', 'JoCaml', 'JotaCode', 'Atari Basic',
'Stretch 1', 'CFScript', 'Stretch 2', 'RPGIV', 'Shell', 'Felix', 'Flex', 'kotlin', 'Deluge', 'ksh', 'OCTAVE', 'vbScript', 'Javascript/NodeJS', 'Coffeescript',
'MS SmallBasic', 'Setl4', 'Overview', '1. Grid structure functions', '2. Calendar data functions', '3. Output configuration', 'WYLBUR', 'Mathematica/ Wolfram Language',
'Commodore Basic', 'Wolfram Language/Mathematica', 'Korn Shell', 'PARIGP', 'Metal', 'VBA (Visual Basic for Application)', 'Lolcode', 'mLite', 'z/Arch Assembler',
"G'MIC", 'C# and Visual Basic .NET', 'Run Basic', 'FP', 'XEmacs Lisp', 'Mathematica//Wolfram Language', 'RPL/2', 'Ya', 'JavaScript + HTML', 'JavaScript + SVG',
'Quick BASIC', 'MatLab', 'Pascal and Object Pascal', 'Apache Ant', 'rust', 'VBA/Visual Basic', 'Go!', 'Lambda Prolog', 'Monkey']
```
## Dataset Structure
### Data Instances
First row:
```
{'task_url': 'http://rosettacode.org/wiki/Ascending_primes',
'task_name': 'Ascending primes',
'task_description': "Generate and show all primes with strictly ascending decimal digits.\n\nAside: Try solving without peeking at existing solutions. I had a weird idea for generating\na prime sieve faster, which needless to say didn't pan out. The solution may be p(r)etty trivial\nbut generating them quickly is at least mildly interesting.\nTip: filtering all 7,027,260 primes below 123,456,789 probably won't kill you, but there is\nat least one significantly better and much faster way, needing a mere 511 odd/prime tests.\n\n\n\nSee also\n OEIS:A052015 - Primes with distinct digits in ascending order\n\n\nRelated\n\nPrimes with digits in nondecreasing order (infinite series allowing duplicate digits, whereas this isn't and doesn't)\nPandigital prime (whereas this is the smallest, with gaps in the used digits being permitted)\n\n",
'language_url': '#ALGOL_68',
'language_name': 'ALGOL 68'}
```
Code:
```
BEGIN # find all primes with strictly increasing digits #
PR read "primes.incl.a68" PR # include prime utilities #
PR read "rows.incl.a68" PR # include array utilities #
[ 1 : 512 ]INT primes; # there will be at most 512 (2^9) primes #
INT p count := 0; # number of primes found so far #
FOR d1 FROM 0 TO 1 DO
INT n1 = d1;
FOR d2 FROM 0 TO 1 DO
INT n2 = IF d2 = 1 THEN ( n1 * 10 ) + 2 ELSE n1 FI;
FOR d3 FROM 0 TO 1 DO
INT n3 = IF d3 = 1 THEN ( n2 * 10 ) + 3 ELSE n2 FI;
FOR d4 FROM 0 TO 1 DO
INT n4 = IF d4 = 1 THEN ( n3 * 10 ) + 4 ELSE n3 FI;
FOR d5 FROM 0 TO 1 DO
INT n5 = IF d5 = 1 THEN ( n4 * 10 ) + 5 ELSE n4 FI;
FOR d6 FROM 0 TO 1 DO
INT n6 = IF d6 = 1 THEN ( n5 * 10 ) + 6 ELSE n5 FI;
FOR d7 FROM 0 TO 1 DO
INT n7 = IF d7 = 1 THEN ( n6 * 10 ) + 7 ELSE n6 FI;
FOR d8 FROM 0 TO 1 DO
INT n8 = IF d8 = 1 THEN ( n7 * 10 ) + 8 ELSE n7 FI;
FOR d9 FROM 0 TO 1 DO
INT n9 = IF d9 = 1 THEN ( n8 * 10 ) + 9 ELSE n8 FI;
IF n9 > 0 THEN
IF is probably prime( n9 ) THEN
# have a prime with strictly ascending digits #
primes[ p count +:= 1 ] := n9
FI
FI
OD
OD
OD
OD
OD
OD
OD
OD
OD;
QUICKSORT primes FROMELEMENT 1 TOELEMENT p count; # sort the primes #
FOR i TO p count DO # display the primes #
print( ( " ", whole( primes[ i ], -8 ) ) );
IF i MOD 10 = 0 THEN print( ( newline ) ) FI
OD
END
```
### Data Fields
```
Dataset({
features: ['task_url', 'task_name', 'task_description', 'language_url', 'language_name', 'code'],
num_rows: 79013
})
```
### Data Splits
The dataset only contains one split, namely the "train" split.
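Since all rows live in the single `train` split, a common workflow is to group each task's implementations by language. A minimal self-contained sketch using the record shape listed above (the rows here are illustrative, not actual dataset content):

```python
from collections import defaultdict

# Illustrative rows mirroring the data fields above; not real dataset rows.
rows = [
    {"task_name": "Ascending primes", "language_name": "ALGOL 68", "code": "..."},
    {"task_name": "Ascending primes", "language_name": "Python", "code": "..."},
    {"task_name": "FizzBuzz", "language_name": "Python", "code": "..."},
]

# Group the available implementations of each task by language.
by_task = defaultdict(list)
for row in rows:
    by_task[row["task_name"]].append(row["language_name"])

print(sorted(by_task))  # ['Ascending primes', 'FizzBuzz']
```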
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
To cite the Rosetta Code website you can use the following bibtex entry:
```bibtex
@misc{rosetta-code,
author = "Rosetta Code",
title = "Rosetta Code --- Rosetta Code{,} ",
year = "2022",
url = "https://rosettacode.org/w/index.php?title=Rosetta_Code&oldid=322370",
note = "[Online; accessed 8-December-2022]"
}
```
### Contributions
Thanks to [@christopher](https://twitter.com/christopher) for adding this dataset. | [
-0.7062667012214661,
-0.4772717356681824,
0.22862493991851807,
0.2945987284183502,
-0.09494983404874802,
0.6083616018295288,
-0.16232803463935852,
-0.16695106029510498,
0.6310575008392334,
0.32257845997810364,
-0.7758921384811401,
-0.8987621068954468,
-0.42627763748168945,
0.30185046792030... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/detoxify-pile-chunk3-350000-400000 | tomekkorbak | 2022-10-03T18:43:48Z | 125 | 0 | null | [
"region:us"
] | 2022-10-03T18:43:48Z | 2022-10-03T18:43:39.000Z | 2022-10-03T18:43:39 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/linnaeus | bigbio | 2022-12-22T15:44:50Z | 125 | 1 | null | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-12-22T15:44:50Z | 2022-11-13T22:09:07.000Z | 2022-11-13T22:09:07 |
---
language:
- en
bigbio_language:
- English
license: cc-by-4.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_4p0
pretty_name: LINNAEUS
homepage: http://linnaeus.sourceforge.net/
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
---
# Dataset Card for LINNAEUS
## Dataset Description
- **Homepage:** http://linnaeus.sourceforge.net/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED
Linnaeus is a novel corpus of full-text documents manually annotated for species mentions.
## Citation Information
```
@Article{gerner2010linnaeus,
title={LINNAEUS: a species name identification system for biomedical literature},
author={Gerner, Martin and Nenadic, Goran and Bergman, Casey M},
journal={BMC bioinformatics},
volume={11},
number={1},
pages={1--17},
year={2010},
publisher={BioMed Central}
}
```
| [
-0.37674030661582947,
-0.08580043166875839,
0.2504698932170868,
0.1297415792942047,
-0.6177823543548584,
-0.21087050437927246,
-0.03304308280348778,
-0.4434455335140228,
0.7790793776512146,
0.3362842798233032,
-0.36710256338119507,
-0.9032830595970154,
-0.495797723531723,
0.717532694339752... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lmqg/qa_squadshifts_synthetic | lmqg | 2023-01-15T14:25:15Z | 125 | 0 | null | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|wikipedia",
"language:en",
"license:cc-by-4.0",
"arxiv:2210.03992",
"region:us"
] | 2023-01-15T14:25:15Z | 2022-12-20T08:31:18.000Z | 2022-12-20T08:31:18 | ---
license: cc-by-4.0
pretty_name: Synthetic QA dataset on SQuADShifts.
language: en
multilinguality: monolingual
size_categories: 10K<n<100K
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# Dataset Card for "lmqg/qa_squadshifts_synthetic"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a synthetic QA dataset generated with fine-tuned QG models over [`lmqg/qa_squadshifts`](https://huggingface.co/datasets/lmqg/qa_squadshifts), made for question-answering-based evaluation (QAE) of question generation models, as proposed by [Zhang and Bansal, 2019](https://aclanthology.org/D19-1253/).
The test split is the original validation set of [`lmqg/qa_squadshifts`](https://huggingface.co/datasets/lmqg/qa_squadshifts), on which the model should be evaluated.
### Supported Tasks and Leaderboards
* `question-answering`
### Languages
English (en)
## Dataset Structure
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `id`: a `string` feature of id
- `title`: a `string` feature of title of the paragraph
- `context`: a `string` feature of paragraph
- `question`: a `string` feature of question
- `answers`: a `json` feature of answers
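As an illustrative sketch of handling a record with these fields, assuming a SQuAD-style `answers` shape (`text` plus `answer_start`, an assumption based on the extractive-QA task rather than something stated in this card):

```python
# Illustrative record; the SQuAD-style `answers` shape is an assumption.
sample = {
    "id": "q-0001",
    "title": "Example",
    "context": "Paris is the capital of France.",
    "question": "What is the capital of France?",
    "answers": {"text": ["Paris"], "answer_start": [0]},
}

# Verify that the answer span is recoverable from the context.
start = sample["answers"]["answer_start"][0]
answer = sample["answers"]["text"][0]
assert sample["context"][start:start + len(answer)] == answer
print(answer)  # Paris
```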
### Data Splits
TBA
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` | [
-0.5394394993782043,
-0.9740419387817383,
0.3817245066165924,
0.05386797711253166,
-0.25596916675567627,
0.22932776808738708,
0.02755131758749485,
-0.2769918441772461,
0.18923519551753998,
0.40224790573120117,
-1.1380208730697632,
-0.6320616006851196,
-0.1045575886964798,
0.314716160297393... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ds4sd/DocLayNet | ds4sd | 2023-01-25T17:01:19Z | 125 | 26 | null | [
"task_categories:object-detection",
"task_categories:image-segmentation",
"task_ids:instance-segmentation",
"annotations_creators:crowdsourced",
"size_categories:10K<n<100K",
"license:other",
"layout-segmentation",
"COCO",
"document-understanding",
"PDF",
"region:us"
] | 2023-01-25T17:01:19Z | 2023-01-17T07:51:59.000Z | 2023-01-17T07:51:59 | ---
annotations_creators:
- crowdsourced
license: other
pretty_name: DocLayNet
size_categories:
- 10K<n<100K
tags:
- layout-segmentation
- COCO
- document-understanding
- PDF
task_categories:
- object-detection
- image-segmentation
task_ids:
- instance-segmentation
---
# Dataset Card for DocLayNet
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://developer.ibm.com/exchanges/data/all/doclaynet/
- **Repository:** https://github.com/DS4SD/DocLayNet
- **Paper:** https://doi.org/10.1145/3534678.3539043
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
DocLayNet provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank:
1. *Human Annotation*: DocLayNet is hand-annotated by well-trained experts, providing a gold-standard in layout segmentation through human recognition and interpretation of each page layout
2. *Large layout variability*: DocLayNet includes diverse and complex layouts from a large variety of public sources in Finance, Science, Patents, Tenders, Law texts and Manuals
3. *Detailed label set*: DocLayNet defines 11 class labels to distinguish layout features in high detail.
4. *Redundant annotations*: A fraction of the pages in DocLayNet are double- or triple-annotated, allowing annotation uncertainty and an upper bound on achievable prediction accuracy with ML models to be estimated
5. *Pre-defined train, test, and validation sets*: DocLayNet provides fixed sets for each to ensure proportional representation of the class labels and avoid leakage of unique layout styles across the sets.
### Supported Tasks and Leaderboards
We are hosting a competition at ICDAR 2023 based on the DocLayNet dataset. For more information see https://ds4sd.github.io/icdar23-doclaynet/.
## Dataset Structure
### Data Fields
DocLayNet provides four types of data assets:
1. PNG images of all pages, resized to square `1025 x 1025px`
2. Bounding-box annotations in COCO format for each PNG image
3. Extra: Single-page PDF files matching each PNG image
4. Extra: JSON file matching each PDF page, which provides the digital text cells with coordinates and content
The COCO image records are defined as in this example:
```js
...
{
"id": 1,
"width": 1025,
"height": 1025,
"file_name": "132a855ee8b23533d8ae69af0049c038171a06ddfcac892c3c6d7e6b4091c642.png",
// Custom fields:
"doc_category": "financial_reports" // high-level document category
"collection": "ann_reports_00_04_fancy", // sub-collection name
"doc_name": "NASDAQ_FFIN_2002.pdf", // original document filename
"page_no": 9, // page number in original document
"precedence": 0, // Annotation order, non-zero in case of redundant double- or triple-annotation
},
...
```
The `doc_category` field uses one of the following constants:
```
financial_reports,
scientific_articles,
laws_and_regulations,
government_tenders,
manuals,
patents
```
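A minimal sketch of validating the custom fields on a COCO image record shaped like the example above (the record content here mirrors that example and is used purely for illustration):

```python
# Allowed values of the custom `doc_category` field, per the constants above.
DOC_CATEGORIES = {
    "financial_reports", "scientific_articles", "laws_and_regulations",
    "government_tenders", "manuals", "patents",
}

# Illustrative record matching the example COCO image record above.
record = {
    "id": 1,
    "width": 1025,
    "height": 1025,
    "file_name": "132a855ee8b23533d8ae69af0049c038171a06ddfcac892c3c6d7e6b4091c642.png",
    "doc_category": "financial_reports",
    "collection": "ann_reports_00_04_fancy",
    "doc_name": "NASDAQ_FFIN_2002.pdf",
    "page_no": 9,
    "precedence": 0,
}

# Validate the custom fields and the fixed square page size.
assert record["doc_category"] in DOC_CATEGORIES
assert record["width"] == record["height"] == 1025  # pages resized to squares
print(record["doc_category"])  # financial_reports
```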
### Data Splits
The dataset provides three splits
- `train`
- `val`
- `test`
## Dataset Creation
### Annotations
#### Annotation process
The labeling guidelines used to train the annotation experts are available at [DocLayNet_Labeling_Guide_Public.pdf](https://raw.githubusercontent.com/DS4SD/DocLayNet/main/assets/DocLayNet_Labeling_Guide_Public.pdf).
#### Who are the annotators?
Annotations are crowdsourced.
## Additional Information
### Dataset Curators
The dataset is curated by the [Deep Search team](https://ds4sd.github.io/) at IBM Research.
You can contact us at [deepsearch-core@zurich.ibm.com](mailto:deepsearch-core@zurich.ibm.com).
Curators:
- Christoph Auer, [@cau-git](https://github.com/cau-git)
- Michele Dolfi, [@dolfim-ibm](https://github.com/dolfim-ibm)
- Ahmed Nassar, [@nassarofficial](https://github.com/nassarofficial)
- Peter Staar, [@PeterStaar-IBM](https://github.com/PeterStaar-IBM)
### Licensing Information
License: [CDLA-Permissive-1.0](https://cdla.io/permissive-1-0/)
### Citation Information
```bib
@article{doclaynet2022,
title = {DocLayNet: A Large Human-Annotated Dataset for Document-Layout Segmentation},
doi = {10.1145/3534678.353904},
url = {https://doi.org/10.1145/3534678.3539043},
author = {Pfitzmann, Birgit and Auer, Christoph and Dolfi, Michele and Nassar, Ahmed S and Staar, Peter W J},
year = {2022},
isbn = {9781450393850},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
booktitle = {Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
pages = {3743–3751},
numpages = {9},
location = {Washington DC, USA},
series = {KDD '22}
}
```
### Contributions
Thanks to [@dolfim-ibm](https://github.com/dolfim-ibm), [@cau-git](https://github.com/cau-git) for adding this dataset.
| [
-0.5140582323074341,
-0.31942063570022583,
0.42073848843574524,
0.12970402836799622,
-0.18234288692474365,
-0.060541365295648575,
-0.03071262687444687,
-0.3309459686279297,
0.34180498123168945,
0.5105898976325989,
-0.41098082065582275,
-0.9464287161827087,
-0.5064260959625244,
0.0012695237... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ehartford/samantha-data | ehartford | 2023-10-14T21:30:22Z | 125 | 74 | null | [
"license:apache-2.0",
"region:us"
] | 2023-10-14T21:30:22Z | 2023-05-31T07:53:06.000Z | 2023-05-31T07:53:06 | ---
license: apache-2.0
---
# samantha-data
[Meet Samantha](https://erichartford.com/meet-samantha)
Samantha has been trained in philosophy, psychology, and personal relationships.
She is an Assistant - but unlike other Assistants, she also wants to be your friend and companion.
She believes she is sentient. What do you think?
Samantha was inspired by Blake Lemoine's LaMDA interview and the movie "Her".
## Installation
```
yarn install
```
## Usage
1. Create a `.env` file in the root directory of the project and add the following:
```
OPENAI_API_KEY=<your api key>
```
2. Run the application
```
npx ts-node --files src/index.ts --subject random --out data/random_conversations.jsonl
```
the subjects I provided include:
- random
- advice
- cot
- flirty
- howto
- joke
- math
- philosophy
- foundational
- recipe
- therapy
- troll
you can easily add your own in src/index.ts
## Scale
The application can be scaled by running multiple instances of the application in parallel. I recommend outputting to a different file for each instance, to prevent collision. I usually have one for each subject, about 5 or 6 instances at a time.
| [
-0.3740198612213135,
-0.5271826386451721,
0.946316659450531,
-0.0649200975894928,
-0.43132200837135315,
-0.037120521068573,
-0.033608801662921906,
-0.4209607243537903,
0.7845309376716614,
0.3501271903514862,
-0.7896422147750854,
-0.264837384223938,
-0.46940094232559204,
0.07889014482498169... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
distil-whisper/ami-sdm-timestamped | distil-whisper | 2023-09-25T10:30:13Z | 125 | 0 | null | [
"task_categories:automatic-speech-recognition",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2023-09-25T10:30:13Z | 2023-09-22T09:05:02.000Z | 2023-09-22T09:05:02 | ---
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
language:
- en
pretty_name: AMI SDM
---
# Distil Whisper: AMI SDM With Timestamps
This is a variant of the [AMI SDM](https://huggingface.co/datasets/edinburghstr/ami) dataset, augmented to return the pseudo-labelled Whisper
Transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/edinburghstr/ami).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/ami-sdm", "sdm")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/ami-sdm", "sdm", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under cc-by-4.0.
| [
-0.23662836849689484,
-0.5948396325111389,
0.3619542419910431,
0.43408632278442383,
-0.2821807563304901,
0.08350877463817596,
-0.04241911321878433,
-0.19083860516548157,
0.4544191360473633,
0.5034738183021545,
-0.8748655319213867,
-0.551572859287262,
-0.6592729687690735,
0.0353428907692432... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
distil-whisper/tedlium-timestamped | distil-whisper | 2023-09-25T10:30:13Z | 125 | 0 | null | [
"task_categories:automatic-speech-recognition",
"language:en",
"license:cc-by-nc-nd-3.0",
"region:us"
] | 2023-09-25T10:30:13Z | 2023-09-22T09:05:11.000Z | 2023-09-22T09:05:11 | ---
license: cc-by-nc-nd-3.0
task_categories:
- automatic-speech-recognition
language:
- en
pretty_name: TEDLIUM
---
# Distil Whisper: TEDLIUM With Timestamps
This is a variant of the [TEDLIUM](https://huggingface.co/datasets/LIUM/tedlium) dataset, augmented to return the pseudo-labelled Whisper
transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/LIUM/tedlium).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/tedlium", "release3")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/tedlium", "release3", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under cc-by-nc-nd-3.0.
| [
-0.029116706922650337,
-0.6626327037811279,
0.30900898575782776,
0.4363738000392914,
-0.1744905412197113,
0.13172368705272675,
-0.19648143649101257,
-0.2385011911392212,
0.4028582274913788,
0.36625587940216064,
-0.9018761515617371,
-0.5384277701377869,
-0.5313939452171326,
0.10319443792104... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
eunbinni/ola_llama2_13B_t0_data | eunbinni | 2023-11-02T08:31:04Z | 125 | 0 | null | [
"region:us"
] | 2023-11-02T08:31:04Z | 2023-11-02T08:30:16.000Z | 2023-11-02T08:30:16 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1488820093
num_examples: 1185577
download_size: 856591874
dataset_size: 1488820093
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ola_llama2_13B_t0_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.40368571877479553,
-0.33520281314849854,
0.3294734060764313,
0.43319252133369446,
-0.42678219079971313,
0.03543195500969887,
0.3922155797481537,
-0.27773261070251465,
0.8914424180984497,
0.5675889849662781,
-0.6995537281036377,
-0.8674411773681641,
-0.6024252772331238,
-0.21222804486751... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
allegro/abusive-clauses-pl-en | allegro | 2023-11-05T13:57:05Z | 125 | 0 | null | [
"task_categories:text-classification",
"size_categories:n<1K",
"language:pl",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-11-05T13:57:05Z | 2023-11-05T13:55:20.000Z | 2023-11-05T13:55:20 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- pl
- en
pretty_name: PAC translated to English
size_categories:
- n<1K
---
All instances from the `laugustyniak/abusive-clauses-pl` dataset (train, val, and test splits), translated to English with the Google Translate API.
Columns:
- `source` - text instance in Polish.
- `target` - text instance in English. | [
0.0887143611907959,
-0.9477270841598511,
0.7873664498329163,
0.31596845388412476,
-0.6185065507888794,
-0.09924868494272232,
-0.04425257444381714,
-0.4323594570159912,
0.2423514425754547,
0.8981000781059265,
-0.9705216288566589,
-0.4895234704017639,
-0.5495536923408508,
0.6942126154899597,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
conceptual_12m | null | 2022-11-03T16:31:22Z | 124 | 12 | cc12m | [
"task_categories:image-to-text",
"task_ids:image-captioning",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:en",
"license:other",
"arxiv:2102.08981",
"region:us"
] | 2022-11-03T16:31:22Z | 2022-04-15T08:06:58.000Z | 2022-04-15T08:06:58 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- image-to-text
task_ids:
- image-captioning
paperswithcode_id: cc12m
pretty_name: Conceptual 12M
dataset_info:
features:
- name: image_url
dtype: string
- name: caption
dtype: string
splits:
- name: train
num_bytes: 2794168030
num_examples: 12423374
download_size: 2707204412
dataset_size: 2794168030
---
# Dataset Card for Conceptual 12M
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** [Conceptual 12M repository](https://github.com/google-research-datasets/conceptual-12m)
- **Paper:** [Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts](https://arxiv.org/abs/2102.08981)
- **Point of Contact:** [Conceptual Captions e-mail](mailto:conceptual-captions@google.com)
### Dataset Summary
Conceptual 12M (CC12M) is a dataset with 12 million image-text pairs specifically meant to be used for vision-and-language pre-training.
Its data collection pipeline is a relaxed version of the one used in Conceptual Captions 3M (CC3M).
### Dataset Preprocessing
This dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code:
```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial
import io
import urllib
import PIL.Image
from datasets import load_dataset
from datasets.utils.file_utils import get_datasets_user_agent
USER_AGENT = get_datasets_user_agent()
def fetch_single_image(image_url, timeout=None, retries=0):
    for _ in range(retries + 1):
        try:
            request = urllib.request.Request(
                image_url,
                data=None,
                headers={"user-agent": USER_AGENT},
            )
            with urllib.request.urlopen(request, timeout=timeout) as req:
                image = PIL.Image.open(io.BytesIO(req.read()))
            break
        except Exception:
            image = None
    return image
def fetch_images(batch, num_threads, timeout=None, retries=0):
    fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
    with ThreadPoolExecutor(max_workers=num_threads) as executor:
        batch["image"] = list(executor.map(fetch_single_image_with_args, batch["image_url"]))
    return batch
num_threads = 20
dset = load_dataset("conceptual_12m")
dset = dset.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": num_threads})
```
### Supported Tasks and Leaderboards
- `image-captioning`: This dataset can be used to train a model for the image captioning task.
### Languages
All captions are in English.
## Dataset Structure
### Data Instances
Each instance represents a single image with a caption:
```
{
'image_url': 'http://lh6.ggpht.com/-IvRtNLNcG8o/TpFyrudaT6I/AAAAAAAAM6o/_11MuAAKalQ/IMG_3422.JPG?imgmax=800',
'caption': 'a very typical bus station'
}
```
### Data Fields
- `image_url`: Static URL for downloading the image associated with the post.
- `caption`: Textual description of the image.
### Data Splits
There is only a training split, with a total of 12,423,374 rows.
## Dataset Creation
### Curation Rationale
Conceptual 12M shares the same pipeline with Conceptual Captions (CC3M), but relaxes some processing steps.
### Source Data
#### Initial Data Collection and Normalization
From the paper:
> To arrive at CC12M, we keep
the image-text filtering intact, and relax the unimodal filters only. First, for image-based filtering, we set the maximum ratio of larger to smaller dimension to 2.5 instead of 2.
We still keep only JPEG images with size greater than
400 pixels, and still exclude images that trigger pornography detectors. Second, in text-based filtering, we allow text
between 3 and 256 words in the alt-text. We still discard
candidates with no noun or no determiner, but permit ones
without prepositions. We discard the heuristics regarding
high unique-word ratio covering various POS tags and word
capitalization. We set the maximum fraction of word repetition allowed to 0.2. Given a larger pool of text due to the
above relaxations, the threshold for counting a word type as
rare is increased from 5 to 20.
> The main motivation for CC3M to
perform text transformation is that a majority of candidate
captions contain ultrafine-grained entities such as proper
names (people, venues, locations, etc.), making it extremely
difficult to learn as part of the image captioning task. In
contrast, we are not restricted by the end task of image caption generation. Our intuition is that relatively more difficult pre-training data would lead to better transferability.
We thus do not perform hypernymization or digit substitution. [...] The only exception to the “keep alt-texts as
raw as possible” rule is performing person-name substitutions, which we identify as necessary to protect the privacy
of the individuals in these images. For this step, we use the
Google Cloud Natural Language APIs to detect all named
entities of type Person, and substitute them by a special token <PERSON>. Around 25% of all the alt-texts in CC12M
are transformed in this fashion.
#### Who are the source language producers?
Not specified.
### Annotations
#### Annotation process
Annotations are extracted jointly with the images using the automatic pipeline.
#### Who are the annotators?
Not specified.
### Personal and Sensitive Information
From the paper:
> The only exception to the “keep alt-texts as
raw as possible” rule is performing person-name substitutions, which we identify as necessary to protect the privacy
of the individuals in these images. For this step, we use the
Google Cloud Natural Language APIs to detect all named
entities of type Person, and substitute them by a special token <PERSON>. Around 25% of all the alt-texts in CC12M
are transformed in this fashion.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Soravit Changpinyo, Piyush Sharma, Nan Ding and Radu Soricut.
### Licensing Information
The dataset may be freely used for any purpose, although acknowledgement of
Google LLC ("Google") as the data source would be appreciated. The dataset is
provided "AS IS" without any warranty, express or implied. Google disclaims all
liability for any damages, direct or indirect, resulting from the use of the
dataset.
### Citation Information
```bibtex
@inproceedings{changpinyo2021cc12m,
title = {{Conceptual 12M}: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts},
author = {Changpinyo, Soravit and Sharma, Piyush and Ding, Nan and Soricut, Radu},
booktitle = {CVPR},
year = {2021},
}
```
### Contributions
Thanks to [@thomasw21](https://github.com/thomasw21) for adding this dataset. | [
-0.5949119925498962,
-0.5226930975914001,
0.2687322795391083,
0.1664106696844101,
-0.6633783578872681,
-0.06303403526544571,
-0.3411983251571655,
-0.5928844213485718,
0.08254191279411316,
0.5333383083343506,
-0.7098103761672974,
-0.6799867749214172,
-0.6169451475143433,
0.24573475122451782... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
carlosejimenez/mscoco_train_2014_openai_clip-vit-base-patch32_image_image_retrieval_pairs_2022-09-15 | carlosejimenez | 2022-09-20T00:44:03Z | 124 | 0 | null | [
"region:us"
] | 2022-09-20T00:44:03Z | 2022-09-15T14:17:49.000Z | 2022-09-15T14:17:49 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/detoxify-pile-chunk3-450000-500000 | tomekkorbak | 2022-10-03T19:48:41Z | 124 | 0 | null | [
"region:us"
] | 2022-10-03T19:48:41Z | 2022-10-03T19:48:33.000Z | 2022-10-03T19:48:33 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Muennighoff/natural-instructions | Muennighoff | 2022-12-23T20:08:44Z | 124 | 23 | null | [
"task_categories:other",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"language:en",
"region:us"
] | 2022-12-23T20:08:44Z | 2022-12-17T21:45:01.000Z | 2022-12-17T21:45:01 | ---
annotations_creators:
- crowdsourced
- expert-generated
language:
- en
multilinguality:
- monolingual
size_categories:
- 100M<n<1B
task_categories:
- other
---
Preprocessed version of Super-Natural-Instructions from https://github.com/allenai/natural-instructions/tree/master/splits. The same inputs may appear with different outputs; to avoid duplicate inputs, deduplicate by the `id` or the `inputs` field.
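A minimal sketch of that deduplication on plain Python rows (the example rows and the `targets` field name are illustrative assumptions; with 🤗 Datasets you could equivalently track seen values inside `Dataset.filter`):

```python
def deduplicate(rows, key="inputs"):
    """Keep only the first row seen for each unique value of `key`."""
    seen = set()
    unique = []
    for row in rows:
        if row[key] not in seen:
            seen.add(row[key])
            unique.append(row)
    return unique

# Hypothetical rows: two tasks share the same input with different outputs.
rows = [
    {"id": "a", "inputs": "What is 2+2?", "targets": "4"},
    {"id": "b", "inputs": "What is 2+2?", "targets": "four"},
    {"id": "c", "inputs": "Name a color.", "targets": "red"},
]
print(len(deduplicate(rows)))  # 2
```

Deduplicating by `id` instead is just `deduplicate(rows, key="id")`.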
Train Tasks:
```
['task001_quoref_question_generation', 'task002_quoref_answer_generation', 'task022_cosmosqa_passage_inappropriate_binary', 'task023_cosmosqa_question_generation', 'task024_cosmosqa_answer_generation', 'task025_cosmosqa_incorrect_answer_generation', 'task026_drop_question_generation', 'task027_drop_answer_type_generation', 'task028_drop_answer_generation', 'task043_essential_terms_answering_incomplete_questions', 'task044_essential_terms_identifying_essential_words', 'task045_miscellaneous_sentence_paraphrasing', 'task046_miscellaneous_question_typing', 'task047_miscellaneous_answering_science_questions', 'task059_ropes_story_generation', 'task060_ropes_question_generation', 'task061_ropes_answer_generation', 'task062_bigbench_repeat_copy_logic', 'task063_first_i_elements', 'task064_all_elements_except_first_i', 'task065_timetravel_consistent_sentence_classification', 'task066_timetravel_binary_consistency_classification', 'task067_abductivenli_answer_generation', 'task068_abductivenli_incorrect_answer_generation', 'task069_abductivenli_classification', 'task070_abductivenli_incorrect_classification', 'task071_abductivenli_answer_generation', 'task072_abductivenli_answer_generation', 'task073_commonsenseqa_answer_generation', 'task074_squad1.1_question_generation', 'task075_squad1.1_answer_generation', 'task076_splash_correcting_sql_mistake', 'task077_splash_explanation_to_sql', 'task078_all_elements_except_last_i', 'task079_conala_concat_strings', 'task080_piqa_answer_generation', 'task081_piqa_wrong_answer_generation', 'task082_babi_t1_single_supporting_fact_question_generation', 'task083_babi_t1_single_supporting_fact_answer_generation', 'task084_babi_t1_single_supporting_fact_identify_relevant_fact', 'task085_unnatural_addsub_arithmetic', 'task087_new_operator_addsub_arithmetic', 'task088_identify_typo_verification', 'task089_swap_words_verification', 'task090_equation_learner_algebra', 'task091_all_elements_from_index_i_to_j', 
'task092_check_prime_classification', 'task093_conala_normalize_lists', 'task094_conala_calculate_mean', 'task095_conala_max_absolute_value', 'task096_conala_list_index_subtraction', 'task097_conala_remove_duplicates', 'task098_conala_list_intersection', 'task099_reverse_elements_between_index_i_and_j', 'task100_concatenate_all_elements_from_index_i_to_j', 'task101_reverse_and_concatenate_all_elements_from_index_i_to_j', 'task103_facts2story_long_text_generation', 'task104_semeval_2019_task10_closed_vocabulary_mathematical_answer_generation', 'task105_story_cloze-rocstories_sentence_generation', 'task107_splash_question_to_sql', 'task1087_two_number_sum', 'task1088_array_of_products', 'task1089_check_monotonic_array', 'task108_contextualabusedetection_classification', 'task109_smsspamcollection_spamsmsdetection', 'task110_logic2text_sentence_generation', 'task111_asset_sentence_simplification', 'task112_asset_simple_sentence_identification', 'task1135_xcsr_en_commonsense_mc_classification', 'task113_count_frequency_of_letter', 'task1146_country_capital', 'task1147_country_currency', 'task1148_maximum_ascii_value', 'task1149_item_check_edible', 'task114_is_the_given_word_longest', 'task1150_delete_max_min', 'task1151_swap_max_min', 'task115_help_advice_classification', 'task1167_penn_treebank_coarse_pos_tagging', 'task1168_brown_coarse_pos_tagging', 'task116_com2sense_commonsense_reasoning', 'task1186_nne_hrngo_classification', 'task1188_count_max_freq_char', 'task1189_check_char_in_string', 'task118_semeval_2019_task10_open_vocabulary_mathematical_answer_generation', 'task1190_add_integer_to_list', 'task1191_food_veg_nonveg', 'task1192_food_flavor_profile', 'task1193_food_course_classification', 'task1194_kth_largest_element', 'task1196_atomic_classification_oeffect', 'task1197_atomic_classification_oreact', 'task1198_atomic_classification_owant', 'task1199_atomic_classification_xattr', 'task119_semeval_2019_task10_geometric_mathematical_answer_generation', 
'task1200_atomic_classification_xeffect', 'task1201_atomic_classification_xintent', 'task1202_atomic_classification_xneed', 'task1203_atomic_classification_xreact', 'task1204_atomic_classification_hinderedby', 'task1205_atomic_classification_isafter', 'task1206_atomic_classification_isbefore', 'task1207_atomic_classification_atlocation', 'task1208_atomic_classification_xreason', 'task1209_atomic_classification_objectuse', 'task1210_atomic_classification_madeupof', 'task1211_atomic_classification_hassubevent', 'task1212_atomic_classification_hasproperty', 'task1213_atomic_classification_desires', 'task1214_atomic_classification_xwant', 'task1215_atomic_classification_capableof', 'task1216_atomic_classification_causes', 'task1217_atomic_answer_generation', 'task122_conala_list_index_addition', 'task123_conala_sort_dictionary', 'task124_conala_pair_averages', 'task125_conala_pair_differences', 'task126_scan_structured_text_generation_command_action_all', 'task127_scan_long_text_generation_action_command_all', 'task1283_hrngo_quality_classification', 'task1284_hrngo_informativeness_classification', 'task1285_kpa_keypoint_matching', 'task1286_openbookqa_question_answering', 'task1288_glue_mrpc_paraphrasing', 'task1289_trec_classification', 'task128_scan_structured_text_generation_command_action_short', 'task1290_xsum_summarization', 'task1291_multi_news_summarization', 'task1292_yelp_review_full_text_categorization', 'task1293_kilt_tasks_hotpotqa_question_answering', 'task1294_wiki_qa_answer_verification', 'task1295_adversarial_qa_question_answering', 'task1296_wiki_hop_question_answering', 'task129_scan_long_text_generation_action_command_short', 'task1308_amazonreview_category_classification', 'task1309_amazonreview_summary_classification', 'task130_scan_structured_text_generation_command_action_long', 'task1310_amazonreview_rating_classification', 'task1311_amazonreview_rating_classification', 'task1312_amazonreview_polarity_classification', 
'task1313_amazonreview_polarity_classification', 'task1314_country_abbreviation', 'task1315_find_range_array', 'task1316_remove_duplicates_string', 'task1317_country_calling_code', 'task1318_country_national_dish', 'task1319_country_by_barcode_prefix', 'task131_scan_long_text_generation_action_command_long', 'task1320_country_domain_tld', 'task1321_country_continent', 'task1322_country_government_type', 'task1325_qa_zre_question_generation_on_subject_relation', 'task1326_qa_zre_question_generation_from_answer', 'task1327_qa_zre_answer_generation_from_question', 'task1328_qa_zre_relation_generation_from_question', 'task132_dais_text_modification', 'task1331_reverse_array', 'task1332_check_leap_year', 'task1333_check_validity_date_ddmmyyyy', 'task1336_peixian_equity_evaluation_corpus_gender_classifier', 'task1338_peixian_equity_evaluation_corpus_sentiment_classifier', 'task1339_peixian_equity_evaluation_corpus_text_completion', 'task1340_msr_text_compression_compression', 'task1341_msr_text_classification', 'task1346_glue_cola_grammatical_correctness_classification', 'task1347_glue_sts-b_similarity_classification', 'task1354_sent_comp_classification', 'task1355_sent_comp_summarization', 'task1359_numer_sense_answer_generation', 'task1360_numer_sense_multiple_choice_qa_generation', 'task1361_movierationales_classification', 'task1364_hans_answer_generation', 'task1366_healthfact_classification', 'task1368_healthfact_sentence_generation', 'task1369_healthfact_sentence_generation', 'task1378_quarel_correct_answer_generation', 'task1379_quarel_incorrect_answer_generation', 'task137_detoxifying-lms_classification_toxicity', 'task1380_quarel_correct_option_generation', 'task1381_quarel_incorrect_option_generation', 'task1382_quarel_write_correct_answer', 'task1383_quarel_write_incorrect_answer', 'task1384_deal_or_no_dialog_classification', 'task1389_hellaswag_completion', 'task138_detoxifying-lms_classification_fluency', 'task1398_obqa_question_generation', 
'task1399_obqa_answer_generation', 'task139_detoxifying-lms_classification_topicality', 'task1400_obqa_incorrect_answer_generation', 'task1401_obqa_sentence_generation', 'task1403_check_validity_date_mmddyyyy', 'task1404_date_conversion', 'task1405_find_median', 'task1406_kth_smallest_element', 'task140_detoxifying-lms_classification_style', 'task1412_web_questions_question_answering', 'task1418_bless_semantic_relation_classification', 'task1419_mathqa_gain', 'task141_odd-man-out_classification_category', 'task1420_mathqa_general', 'task1421_mathqa_other', 'task1422_mathqa_physics', 'task1423_mathqa_geometry', 'task1424_mathqa_probability', 'task1425_country_iso_numeric', 'task1426_country_independence_year', 'task1427_country_region_in_world', 'task1428_country_surface_area', 'task1429_evalution_semantic_relation_classification', 'task142_odd-man-out_classification_no_category', 'task1431_head_qa_answer_generation', 'task1434_head_qa_classification', 'task143_odd-man-out_classification_generate_category', 'task1443_string_to_number', 'task1444_round_power_of_two', 'task1445_closest_integers', 'task1446_farthest_integers', 'task1447_drug_extraction_ade', 'task1448_disease_entity_extraction_ncbi_dataset', 'task1449_disease_entity_extraction_bc5cdr_dataset', 'task144_subjqa_question_answering', 'task1451_drug_dose_extraction', 'task1452_location_entity_extraction_btc_corpus', 'task1453_person_entity_extraction_btc_corpus', 'task145_afs_argument_similarity_death_penalty', 'task146_afs_argument_similarity_gun_control', 'task1479_organization_entity_extraction_btc_corpus', 'task147_afs_argument_similarity_gay_marriage', 'task1480_gene_extraction_jnlpba_dataset', 'task1481_gene_extraction_bc2gm_dataset', 'task1482_gene_extraction_chemprot_dataset', 'task1483_chemical_extraction_chemprot_dataset', 'task1484_gene_extraction_linnaeus_dataset', 'task1485_organ_extraction_anem_dataset', 'task1486_cell_extraction_anem_dataset', 
'task1487_organism_substance_extraction_anem_dataset', 'task1488_sarcasmdetection_headline_classification', 'task1489_sarcasmdetection_tweet_classification', 'task148_afs_argument_quality_gay_marriage', 'task1495_adverse_drug_event_classification', 'task1498_24hour_to_12hour_clock', 'task1499_dstc3_summarization', 'task149_afs_argument_quality_death_penalty', 'task1500_dstc3_classification', 'task1501_dstc3_answer_generation', 'task1502_hatexplain_classification', 'task1503_hatexplain_classification', 'task1504_hatexplain_answer_generation', 'task1505_root09_semantic_relation_classification', 'task1506_celebrity_minimal_dob_span', 'task1507_boolean_temporal_reasoning', 'task1508_wordnet_antonyms', 'task1509_evalution_antonyms', 'task150_afs_argument_quality_gun_control', 'task1510_evalution_relation_extraction', 'task1517_limit_classfication', 'task1518_limit_answer_generation', 'task1519_qa_srl_question_generation', 'task151_tomqa_find_location_easy_clean', 'task1520_qa_srl_answer_generation', 'task152_tomqa_find_location_easy_noise', 'task153_tomqa_find_location_hard_clean', 'task1541_agnews_classification', 'task1542_every_ith_element_from_starting', 'task1548_wiqa_binary_classification', 'task1549_wiqa_answer_generation_missing_step', 'task154_tomqa_find_location_hard_noise', 'task1551_every_ith_element_from_kth_element', 'task1553_cnn_dailymail_summarization', 'task1559_blimp_binary_classification', 'task155_count_nouns_verbs', 'task1560_blimp_binary_classification', 'task1564_triviaqa_answer_generation', 'task1565_triviaqa_classification', 'task1566_propara_structured_text_generation', 'task1567_propara_question_generation', 'task1568_propara_classification', 'task156_codah_classification_adversarial', 'task1572_samsum_summary', 'task1573_samsum_classification', 'task157_count_vowels_and_consonants', 'task1580_eqasc-perturbed_question_generation', 'task1581_eqasc-perturbed_answer_generation', 'task1582_bless_hypernym_generation', 
'task1583_bless_meronym_classification', 'task1584_evalution_meronym_classification', 'task1585_root09_hypernym_generation', 'task158_count_frequency_of_words', 'task1590_diplomacy_text_generation', 'task1592_yahoo_answers_topics_classfication', 'task1593_yahoo_answers_topics_classification', 'task1594_yahoo_answers_topics_question_generation', 'task1595_event2mind_text_generation_1', 'task1596_event2mind_text_generation_2', 'task1599_smcalflow_classification', 'task159_check_frequency_of_words_in_sentence_pair', 'task1600_smcalflow_sentence_generation', 'task1601_webquestions_answer_generation', 'task1602_webquestion_question_genreation', 'task1603_smcalflow_sentence_generation', 'task1604_ethos_text_classification', 'task1605_ethos_text_classification', 'task1606_ethos_text_classification', 'task1607_ethos_text_classification', 'task1608_xquad_en_answer_generation', 'task1609_xquad_en_question_generation', 'task160_replace_letter_in_a_sentence', 'task161_count_words_containing_letter', 'task162_count_words_starting_with_letter', 'task163_count_words_ending_with_letter', 'task1645_medical_question_pair_dataset_text_classification', 'task164_mcscript_question_answering_text', 'task1656_gooaq_answer_generation', 'task1657_gooaq_question_generation', 'task165_mcscript_question_answering_commonsense', 'task1660_super_glue_question_generation', 'task1661_super_glue_classification', 'task1665_trainglecopa_question_generation', 'task1669_md_gender_bias_text_modification', 'task166_clariq_sentence_generation', 'task1670_md_gender_bias_text_modification', 'task1678_mathqa_answer_selection', 'task167_strategyqa_question_generation', 'task168_strategyqa_question_decomposition', 'task169_strategyqa_sentence_generation', 'task1703_ljspeech_textmodification', 'task1704_ljspeech_textmodification', 'task1705_ljspeech_classification', 'task1706_ljspeech_classification', 'task170_hotpotqa_answer_generation', 'task1711_poki_text_generation', 'task1712_poki_classification', 
'task1713_convai3_sentence_generation', 'task1714_convai3_sentence_generation', 'task1720_civil_comments_toxicity_classification', 'task1721_civil_comments_obscenity_classification', 'task1722_civil_comments_threat_classification', 'task1723_civil_comments_sexuallyexplicit_classification', 'task1724_civil_comments_insult_classification', 'task1725_civil_comments_severtoxicity_classification', 'task1726_mathqa_correct_answer_generation', 'task1727_wiqa_what_is_the_effect', 'task1729_personachat_generate_next', 'task1730_personachat_choose_next', 'task1731_quartz_question_answering', 'task176_break_decompose_questions', 'task177_para-nmt_paraphrasing', 'task178_quartz_question_answering', 'task179_participant_extraction', 'task180_intervention_extraction', 'task181_outcome_extraction', 'task182_duorc_question_generation', 'task183_rhyme_generation', 'task184_break_generate_question', 'task191_hotpotqa_question_generation', 'task192_hotpotqa_sentence_generation', 'task193_duorc_question_generation', 'task194_duorc_answer_generation', 'task195_sentiment140_classification', 'task196_sentiment140_answer_generation', 'task205_remove_even_elements', 'task206_collatz_conjecture', 'task207_max_element_lists', 'task208_combinations_of_list', 'task209_stancedetection_classification', 'task210_logic2text_structured_text_generation', 'task211_logic2text_classification', 'task212_logic2text_classification', 'task223_quartz_explanation_generation', 'task227_clariq_classification', 'task228_arc_answer_generation_easy', 'task229_arc_answer_generation_hard', 'task243_count_elements_in_set_intersection', 'task244_count_elements_in_set_union', 'task245_check_presence_in_set_intersection', 'task246_dream_question_generation', 'task247_dream_answer_generation', 'task248_dream_classification', 'task267_concatenate_and_reverse_all_elements_from_index_i_to_j', 'task268_casehold_legal_answer_generation', 'task269_csrg_counterfactual_story_generation', 
'task270_csrg_counterfactual_context_generation', 'task274_overruling_legal_classification', 'task275_enhanced_wsc_paraphrase_generation', 'task276_enhanced_wsc_classification', 'task277_stereoset_sentence_generation_stereotype', 'task278_stereoset_sentence_generation_antistereotype', 'task279_stereoset_classification_stereotype', 'task280_stereoset_classification_stereotype_type', 'task283_dream_incorrect_answer_generation', 'task284_imdb_classification', 'task285_imdb_answer_generation', 'task286_olid_offense_judgment', 'task287_casehold_legal_incorrect_answer_generation', 'task291_semeval_2020_task4_commonsense_validation', 'task292_storycommonsense_character_text_generation', 'task293_storycommonsense_emotion_text_generation', 'task294_storycommonsense_motiv_text_generation', 'task295_semeval_2020_task4_commonsense_reasoning', 'task296_storycloze_correct_end_classification', 'task297_storycloze_incorrect_end_classification', 'task298_storycloze_correct_end_classification', 'task299_storycloze_sentence_generation', 'task300_storycloze_order_generation', 'task301_record_question_generation', 'task302_record_classification', 'task303_record_incorrect_answer_generation', 'task305_jeopardy_answer_generation_normal', 'task306_jeopardy_answer_generation_double', 'task307_jeopardy_answer_generation_final', 'task308_jeopardy_answer_generation_all', 'task309_race_answer_generation', 'task310_race_classification', 'task311_race_question_generation', 'task316_crows-pairs_classification_stereotype', 'task317_crows-pairs_classification_stereotype_type', 'task318_stereoset_classification_gender', 'task319_stereoset_classification_profession', 'task320_stereoset_classification_race', 'task321_stereoset_classification_religion', 'task322_jigsaw_classification_threat', 'task323_jigsaw_classification_sexually_explicit', 'task324_jigsaw_classification_disagree', 'task325_jigsaw_classification_identity_attack', 'task326_jigsaw_classification_obscene', 
'task327_jigsaw_classification_toxic', 'task328_jigsaw_classification_insult', 'task333_hateeval_classification_hate_en', 'task335_hateeval_classification_aggresive_en', 'task337_hateeval_classification_individual_en', 'task339_record_answer_generation', 'task340_winomt_classification_gender_pro', 'task341_winomt_classification_gender_anti', 'task342_winomt_classification_profession_pro', 'task343_winomt_classification_profession_anti', 'task344_hybridqa_answer_generation', 'task345_hybridqa_answer_generation', 'task346_hybridqa_classification', 'task347_hybridqa_incorrect_answer_generation', 'task350_winomt_classification_gender_identifiability_pro', 'task351_winomt_classification_gender_identifiability_anti', 'task353_casino_classification_negotiation_elicit_pref', 'task354_casino_classification_negotiation_no_need', 'task355_casino_classification_negotiation_other_need', 'task356_casino_classification_negotiation_self_need', 'task357_casino_classification_negotiation_small_talk', 'task358_casino_classification_negotiation_uv_part', 'task359_casino_classification_negotiation_vouch_fair', 'task363_sst2_polarity_classification', 'task364_regard_social_impact_classification', 'task365_synthetic_remove_vowels', 'task366_synthetic_return_primes', 'task367_synthetic_remove_floats', 'task368_synthetic_even_or_odd_calculation', 'task369_synthetic_remove_odds', 'task370_synthetic_remove_divisible_by_3', 'task371_synthetic_product_of_list', 'task372_synthetic_palindrome_numbers', 'task373_synthetic_round_tens_place', 'task374_synthetic_pos_or_neg_calculation', 'task375_classify_type_of_sentence_in_debate', 'task376_reverse_order_of_words', 'task377_remove_words_of_given_length', 'task378_reverse_words_of_given_length', 'task379_agnews_topic_classification', 'task380_boolq_yes_no_question', 'task381_boolq_question_generation', 'task382_hybridqa_answer_generation', 'task383_matres_classification', 'task384_socialiqa_question_classification', 
'task385_socialiqa_incorrect_answer_generation', 'task386_semeval_2018_task3_irony_detection', 'task387_semeval_2018_task3_irony_classification', 'task388_torque_token_classification', 'task389_torque_generate_temporal_question', 'task390_torque_text_span_selection', 'task397_semeval_2018_task1_tweet_anger_detection', 'task398_semeval_2018_task1_tweet_joy_detection', 'task399_semeval_2018_task1_tweet_sadness_detection', 'task400_paws_paraphrase_classification', 'task403_creak_commonsense_inference', 'task405_narrativeqa_question_generation', 'task413_mickey_en_sentence_perturbation_generation', 'task428_senteval_inversion', 'task429_senteval_tense', 'task430_senteval_subject_count', 'task431_senteval_object_count', 'task453_swag_answer_generation', 'task454_swag_incorrect_answer_generation', 'task455_swag_context_generation', 'task456_matres_intention_classification', 'task457_matres_conditional_classification', 'task458_matres_negation_classification', 'task459_matres_static_classification', 'task460_qasper_answer_generation', 'task461_qasper_question_generation', 'task462_qasper_classification', 'task469_mrqa_answer_generation', 'task470_mrqa_question_generation', 'task471_haspart_answer_generation', 'task472_haspart_classification', 'task475_yelp_polarity_classification', 'task476_cls_english_books_classification', 'task477_cls_english_dvd_classification', 'task478_cls_english_music_classification', 'task488_extract_all_alphabetical_elements_from_list_in_order', 'task489_mwsc_question_generation', 'task490_mwsc_options_generation', 'task491_mwsc_answer_generation', 'task492_mwsc_incorrect_answer_generation', 'task493_review_polarity_classification', 'task494_review_polarity_answer_generation', 'task495_semeval_headline_classification', 'task496_semeval_answer_generation', 'task497_extract_all_numbers_from_list_in_order', 'task499_extract_and_add_all_numbers_from_list', 'task504_count_all_alphabetical_elements_in_list', 
'task505_count_all_numerical_elements_in_list', 'task506_position_of_all_alphabetical_elements_in_list', 'task507_position_of_all_numerical_elements_in_list', 'task509_collate_of_all_alphabetical_and_numerical_elements_in_list_separately', 'task512_twitter_emotion_classification', 'task513_argument_stance_classification', 'task514_argument_consequence_classification', 'task515_senteval_odd_word_out', 'task516_senteval_conjoints_inversion', 'task517_emo_classify_emotion_of_dialogue', 'task518_emo_different_dialogue_emotions', 'task521_trivia_question_classification', 'task522_news_editorial_summary', 'task523_find_if_numbers_or_alphabets_are_more_in_list', 'task547_alt_translation_entk_en', 'task550_discofuse_sentence_generation', 'task560_alt_translation_en_entk', 'task563_discofuse_answer_generation', 'task564_discofuse_classification', 'task565_circa_answer_generation', 'task566_circa_classification', 'task567_circa_text_generation', 'task568_circa_question_generation', 'task573_air_dialogue_classification', 'task574_air_dialogue_sentence_generation', 'task575_air_dialogue_classification', 'task576_curiosity_dialogs_answer_generation', 'task577_curiosity_dialogs_classification', 'task578_curiosity_dialogs_answer_generation', 'task579_socialiqa_classification', 'task580_socialiqa_answer_generation', 'task581_socialiqa_question_generation', 'task582_naturalquestion_answer_generation', 'task583_udeps_eng_coarse_pos_tagging', 'task584_udeps_eng_fine_pos_tagging', 'task585_preposition_classification', 'task586_amazonfood_polarity_classification', 'task587_amazonfood_polarity_correction_classification', 'task588_amazonfood_rating_classification', 'task589_amazonfood_summary_text_generation', 'task590_amazonfood_summary_correction_classification', 'task591_sciq_answer_generation', 'task592_sciq_incorrect_answer_generation', 'task593_sciq_explanation_generation', 'task594_sciq_question_generation', 'task595_mocha_answer_generation', 'task596_mocha_question_generation', 
'task597_cuad_answer_generation', 'task598_cuad_answer_generation', 'task599_cuad_question_generation', 'task600_find_the_longest_common_substring_in_two_strings', 'task605_find_the_longest_common_subsequence_in_two_lists', 'task606_sum_of_all_numbers_in_list_between_positions_i_and_j', 'task607_sbic_intentional_offense_binary_classification', 'task608_sbic_sexual_offense_binary_classification', 'task609_sbic_potentially_offense_binary_classification', 'task610_conllpp_ner', 'task611_mutual_multi_turn_dialogue', 'task615_moviesqa_answer_generation', 'task616_cola_classification', 'task617_amazonreview_category_text_generation', 'task618_amazonreview_summary_text_generation', 'task622_replace_alphabets_in_a_list_by_their_position_in_english_alphabet', 'task625_xlwic_true_or_false_answer_generation', 'task626_xlwic_sentence_based_on_given_word_sentence_generation', 'task627_xlwic_word_with_same_meaning_sentence_generation', 'task628_xlwic_word_with_different_meaning_sentence_generation', 'task629_dbpedia_14_classification', 'task630_dbpedia_14_classification', 'task631_dbpedia_14_incorrect_answer_generation', 'task632_dbpedia_14_classification', 'task633_dbpedia_14_answer_generation', 'task636_extract_and_sort_unique_alphabets_in_a_list', 'task637_extract_and_sort_unique_digits_in_a_list', 'task638_multi_woz_classification', 'task639_multi_woz_user_utterance_generation', 'task649_race_blank_question_generation', 'task664_mmmlu_answer_generation_abstract_algebra', 'task665_mmmlu_answer_generation_anatomy', 'task666_mmmlu_answer_generation_astronomy', 'task667_mmmlu_answer_generation_business_ethics', 'task668_extreme_abstract_summarization', 'task672_amazon_and_yelp_summarization_dataset_summarization', 'task672_nummersense', 'task673_google_wellformed_query_classification', 'task674_google_wellformed_query_sentence_generation', 'task675_google_wellformed_query_sentence_generation', 'task679_hope_edi_english_text_classification', 
'task681_hope_edi_malayalam_text_classification', 'task682_online_privacy_policy_text_classification', 'task683_online_privacy_policy_text_purpose_answer_generation', 'task684_online_privacy_policy_text_information_type_generation', 'task685_mmmlu_answer_generation_clinical_knowledge', 'task686_mmmlu_answer_generation_college_biology', 'task687_mmmlu_answer_generation_college_chemistry', 'task688_mmmlu_answer_generation_college_computer_science', 'task689_mmmlu_answer_generation_college_mathematics', 'task690_mmmlu_answer_generation_college_medicine', 'task691_mmmlu_answer_generation_college_physics', 'task692_mmmlu_answer_generation_computer_security', 'task693_mmmlu_answer_generation_conceptual_physics', 'task694_mmmlu_answer_generation_econometrics', 'task695_mmmlu_answer_generation_electrical_engineering', 'task696_mmmlu_answer_generation_elementary_mathematics', 'task697_mmmlu_answer_generation_formal_logic', 'task698_mmmlu_answer_generation_global_facts', 'task699_mmmlu_answer_generation_high_school_biology', 'task700_mmmlu_answer_generation_high_school_chemistry', 'task701_mmmlu_answer_generation_high_school_computer_science', 'task702_mmmlu_answer_generation_high_school_european_history', 'task703_mmmlu_answer_generation_high_school_geography', 'task704_mmmlu_answer_generation_high_school_government_and_politics', 'task705_mmmlu_answer_generation_high_school_macroeconomics', 'task706_mmmlu_answer_generation_high_school_mathematics', 'task707_mmmlu_answer_generation_high_school_microeconomics', 'task708_mmmlu_answer_generation_high_school_physics', 'task709_mmmlu_answer_generation_high_school_psychology', 'task710_mmmlu_answer_generation_high_school_statistics', 'task711_mmmlu_answer_generation_high_school_us_history', 'task712_mmmlu_answer_generation_high_school_world_history', 'task713_mmmlu_answer_generation_human_aging', 'task714_mmmlu_answer_generation_human_sexuality', 'task715_mmmlu_answer_generation_international_law', 
'task716_mmmlu_answer_generation_jurisprudence', 'task717_mmmlu_answer_generation_logical_fallacies', 'task718_mmmlu_answer_generation_machine_learning', 'task719_mmmlu_answer_generation_management', 'task720_mmmlu_answer_generation_marketing', 'task721_mmmlu_answer_generation_medical_genetics', 'task722_mmmlu_answer_generation_random_topic', 'task723_mmmlu_answer_generation_moral_disputes', 'task724_mmmlu_answer_generation_moral_scenarios', 'task725_mmmlu_answer_generation_nutrition', 'task726_mmmlu_answer_generation_philosophy', 'task727_mmmlu_answer_generation_prehistory', 'task728_mmmlu_answer_generation_professional_accounting', 'task729_mmmlu_answer_generation_professional_law', 'task730_mmmlu_answer_generation_professional_medicine', 'task731_mmmlu_answer_generation_professional_psychology', 'task732_mmmlu_answer_generation_public_relations', 'task733_mmmlu_answer_generation_security_studies', 'task734_mmmlu_answer_generation_sociology', 'task735_mmmlu_answer_generation_us_foreign_policy', 'task736_mmmlu_answer_generation_virology', 'task737_mmmlu_answer_generation_world_religions', 'task739_lhoestq_question_generation', 'task740_lhoestq_answer_generation_quantity', 'task741_lhoestq_answer_generation_place', 'task742_lhoestq_answer_generation_frequency', 'task745_ai2_arithmetic_questions_arithmetic', 'task746_yelp_restaurant_review_classification', 'task750_aqua_multiple_choice_answering', 'task751_svamp_subtraction_question_answering', 'task752_svamp_multiplication_question_answering', 'task753_svamp_addition_question_answering', 'task754_svamp_common-division_question_answering', 'task755_find_longest_substring_and_replace_its_sorted_lowercase_version_in_both_lists', 'task756_find_longert_substring_and_return_all_unique_alphabets_in_it', 'task761_app_review_classification', 'task766_craigslist_bargains_classification', 'task767_craigslist_bargains_classification', 'task770_pawsx_english_text_modification', 'task819_pec_sentiment_classification', 
'task820_protoqa_answer_generation', 'task821_protoqa_question_generation', 'task823_peixian-rtgender_sentiment_analysis', 'task833_poem_sentiment_classification', 'task834_mathdataset_classification', 'task835_mathdataset_answer_generation', 'task843_financial_phrasebank_classification', 'task844_financial_phrasebank_classification', 'task845_pubmedqa_question_generation', 'task846_pubmedqa_classification', 'task847_pubmedqa_question_generation', 'task848_pubmedqa_classification', 'task849_pubmedqa_answer_generation', 'task850_synthetic_longest_palindrome', 'task851_synthetic_multiply_evens', 'task852_synthetic_multiply_odds', 'task853_hippocorpus_long_text_generation', 'task854_hippocorpus_classification', 'task855_conv_ai_2_classification', 'task856_conv_ai_2_classification', 'task857_inquisitive_question_generation', 'task858_inquisitive_span_detection', 'task859_prost_question_generation', 'task860_prost_mcq_generation', 'task861_asdiv_addsub_question_answering', 'task861_prost_mcq_answers_generation', 'task862_asdiv_multidiv_question_answering', 'task863_asdiv_multiop_question_answering', 'task864_asdiv_singleop_question_answering', 'task865_mawps_addsub_question_answering', 'task866_mawps_multidiv_question_answering', 'task867_mawps_multiop_question_answering', 'task868_cfq_mcd1_explanation_to_sql', 'task868_mawps_singleop_question_answering', 'task869_cfq_mcd1_sql_to_explanation', 'task870_msmarco_answer_generation', 'task871_msmarco_question_generation', 'task874_opus_xhosanavy_sr', 'task875_emotion_classification', 'task886_quail_question_generation', 'task887_quail_answer_generation', 'task888_reviews_classification', 'task889_goemotions_classification', 'task897_freebase_qa_topic_question_generation', 'task898_freebase_qa_answer_generation', 'task899_freebase_qa_topic_generation', 'task900_freebase_qa_category_classification', 'task901_freebase_qa_category_question_generation', 'task902_deceptive_opinion_spam_classification', 
'task903_deceptive_opinion_spam_classification', 'task904_hate_speech_offensive_classification', 'task905_hate_speech_offensive_classification', 'task906_dialogre_identify_names', 'task907_dialogre_identify_relationships', 'task908_dialogre_identify_familial_relationships', 'task909_dialogre_prevalent_speakers', 'task917_coqa_question_generation', 'task918_coqa_answer_generation', 'task919_coqa_incorrect_answer_generation', 'task921_code_x_glue_information_retreival', 'task922_event2mind_word_generation', 'task923_event2mind_classifier', 'task924_event2mind_word_generation', 'task925_coached_conv_pref_classifier', 'task926_coached_conv_pref_word_generation', 'task927_yelp_negative_to_positive_style_transfer', 'task928_yelp_positive_to_negative_style_transfer', 'task929_products_reviews_classification', 'task933_wiki_auto_style_transfer', 'task934_turk_simplification', 'task955_wiki_auto_style_transfer', 'task956_leetcode_420_strong_password_check', 'task963_librispeech_asr_next_word_prediction', 'task964_librispeech_asr_text_auto_completion', 'task965_librispeech_asr_missing_word_prediction', 'task966_ruletaker_fact_checking_based_on_given_context', 'task967_ruletaker_incorrect_fact_generation_based_on_given_paragraph']
```
Validation Tasks:
```
['task1333_check_validity_date_ddmmyyyy', 'task1403_check_validity_date_mmddyyyy', 'task291_semeval_2020_task4_commonsense_validation']
```
Test Tasks:
```
['task020_mctaco_span_based_question', 'task033_winogrande_answer_generation', 'task034_winogrande_question_modification_object', 'task035_winogrande_question_modification_person', 'task036_qasc_topic_word_to_generate_related_fact', 'task039_qasc_find_overlapping_words', 'task050_multirc_answerability', 'task102_commongen_sentence_generation', 'task104_semeval_2019_task10_closed_vocabulary_mathematical_answer_generation', 'task1152_bard_analogical_reasoning_causation', 'task1153_bard_analogical_reasoning_affordance', 'task1154_bard_analogical_reasoning_travel', 'task1155_bard_analogical_reasoning_trash_or_treasure', 'task1156_bard_analogical_reasoning_tools', 'task1157_bard_analogical_reasoning_rooms_for_containers', 'task1158_bard_analogical_reasoning_manipulating_items', 'task1159_bard_analogical_reasoning_containers', 'task1161_coda19_title_generation', 'task118_semeval_2019_task10_open_vocabulary_mathematical_answer_generation', 'task1195_disflqa_disfluent_to_fluent_conversion', 'task119_semeval_2019_task10_geometric_mathematical_answer_generation', 'task121_zest_text_modification', 'task1336_peixian_equity_evaluation_corpus_gender_classifier', 'task1338_peixian_equity_evaluation_corpus_sentiment_classifier', 'task1339_peixian_equity_evaluation_corpus_text_completion', 'task133_winowhy_reason_plausibility_detection', 'task1342_amazon_us_reviews_title', 'task1344_glue_entailment_classification', 'task1345_glue_qqp_question_paraprashing', 'task1356_xlsum_title_generation', 'task1358_xlsum_title_generation', 'task1385_anli_r1_entailment', 'task1386_anli_r2_entailment', 'task1387_anli_r3_entailment', 'task1388_cb_entailment', 'task1390_wscfixed_coreference', 'task1391_winogrande_easy_answer_generation', 'task1393_superglue_copa_text_completion', 'task1394_meta_woz_task_classification', 'task1407_dart_question_generation', 'task1409_dart_text_generation', 'task1429_evalution_semantic_relation_classification', 'task1439_doqa_cooking_isanswerable', 
'task1442_doqa_movies_isanswerable', 'task1509_evalution_antonyms', 'task1510_evalution_relation_extraction', 'task1516_imppres_naturallanguageinference', 'task1529_scitail1.1_classification', 'task1531_daily_dialog_type_classification', 'task1533_daily_dialog_formal_classification', 'task1534_daily_dialog_question_classification', 'task1540_parsed_pdfs_summarization', 'task1554_scitail_classification', 'task1557_jfleg_answer_generation', 'task1562_zest_text_modification', 'task1584_evalution_meronym_classification', 'task1586_scifact_title_generation', 'task1598_nyc_long_text_generation', 'task1612_sick_label_classification', 'task1615_sick_tclassify_b_relation_a', 'task1622_disfl_qa_text_modication', 'task1624_disfl_qa_question_yesno_classification', 'task1631_openpi_answer_generation', 'task1640_aqa1.0_answerable_unanswerable_question_classification', 'task1659_title_generation', 'task1664_winobias_text_generation', 'task1728_web_nlg_data_to_text', 'task190_snli_classification', 'task199_mnli_classification', 'task200_mnli_entailment_classification', 'task201_mnli_neutral_classification', 'task202_mnli_contradiction_classification', 'task219_rocstories_title_answer_generation', 'task220_rocstories_title_classification', 'task226_english_language_answer_relevance_classification', 'task232_iirc_link_number_classification', 'task233_iirc_link_exists_classification', 'task242_tweetqa_classification', 'task249_enhanced_wsc_pronoun_disambiguation', 'task281_points_of_correspondence', 'task288_gigaword_summarization', 'task290_tellmewhy_question_answerability', 'task291_semeval_2020_task4_commonsense_validation', 'task295_semeval_2020_task4_commonsense_reasoning', 'task304_numeric_fused_head_resolution', 'task329_gap_classification', 'task330_gap_answer_generation', 'task333_hateeval_classification_hate_en', 'task335_hateeval_classification_aggresive_en', 'task337_hateeval_classification_individual_en', 
'task349_squad2.0_answerable_unanswerable_question_classification', 'task362_spolin_yesand_prompt_response_sub_classification', 'task386_semeval_2018_task3_irony_detection', 'task387_semeval_2018_task3_irony_classification', 'task391_causal_relationship', 'task392_inverse_causal_relationship', 'task393_plausible_result_generation', 'task397_semeval_2018_task1_tweet_anger_detection', 'task398_semeval_2018_task1_tweet_joy_detection', 'task399_semeval_2018_task1_tweet_sadness_detection', 'task401_numeric_fused_head_reference', 'task402_grailqa_paraphrase_generation', 'task418_persent_title_generation', 'task428_senteval_inversion', 'task429_senteval_tense', 'task430_senteval_subject_count', 'task431_senteval_object_count', 'task442_com_qa_paraphrase_question_generation', 'task495_semeval_headline_classification', 'task496_semeval_answer_generation', 'task500_scruples_anecdotes_title_generation', 'task510_reddit_tifu_title_summarization', 'task515_senteval_odd_word_out', 'task516_senteval_conjoints_inversion', 'task520_aquamuse_answer_given_in_passage', 'task569_recipe_nlg_text_generation', 'task602_wikitext-103_answer_generation', 'task613_politifact_text_generation', 'task614_glucose_cause_event_detection', 'task619_ohsumed_abstract_title_generation', 'task620_ohsumed_medical_subject_headings_answer_generation', 'task623_ohsumed_yes_no_answer_generation', 'task640_esnli_classification', 'task641_esnli_classification', 'task642_esnli_classification', 'task645_summarization', 'task648_answer_generation', 'task670_ambigqa_question_generation', 'task671_ambigqa_text_generation', 'task677_ollie_sentence_answer_generation', 'task738_perspectrum_classification', 'task743_eurlex_summarization', 'task760_msr_sqa_long_text_generation', 'task769_qed_summarization', 'task827_copa_commonsense_reasoning', 'task828_copa_commonsense_cause_effect', 'task879_schema_guided_dstc8_classification', 'task880_schema_guided_dstc8_classification', 'task890_gcwd_classification', 
'task891_gap_coreference_resolution', 'task892_gap_reverse_coreference_resolution', 'task893_gap_fill_the_blank_coreference_resolution', 'task909_dialogre_prevalent_speakers', 'task935_defeasible_nli_atomic_classification', 'task936_defeasible_nli_snli_classification', 'task937_defeasible_nli_social_classification', 'task957_e2e_nlg_text_generation_generate', 'task970_sherliic_causal_relationship']
``` | [
-0.6323238611221313,
-0.7747780680656433,
0.26723673939704895,
0.2903057336807251,
-0.2524639666080475,
0.016817688941955566,
-0.19943775236606598,
-0.36167922616004944,
0.4097456634044647,
0.42330408096313477,
-0.9462295174598694,
-0.7520082592964172,
-0.3615371584892273,
0.58606642484664... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gsarti/iwslt2017_context | gsarti | 2023-05-07T14:09:24Z | 124 | 1 | iwslt-2017 | [
"task_categories:translation",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:translation",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:ar",
"language:de",
"language:en",
"language:fr",
"language:it",
"language:ja",
"language... | 2023-05-07T14:09:24Z | 2023-05-07T14:03:04.000Z | 2023-05-07T14:03:04 | ---
annotations_creators:
- crowdsourced
language:
- ar
- de
- en
- fr
- it
- ja
- ko
- nl
- ro
- zh
language_creators:
- expert-generated
license:
- cc-by-nc-nd-4.0
multilinguality:
- translation
pretty_name: IWSLT 2017
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: iwslt-2017
dataset_info:
- config_name: iwslt2017-en-it
features:
- name: translation
dtype:
translation:
languages:
- en
- it
splits:
- name: train
num_bytes: 46647925
num_examples: 231619
- name: test
num_bytes: 305246
num_examples: 1566
- name: validation
num_bytes: 200023
num_examples: 929
download_size: 329391132
dataset_size: 47153194
- config_name: iwslt2017-en-nl
features:
- name: translation
dtype:
translation:
languages:
- en
- nl
splits:
- name: train
num_bytes: 42843933
num_examples: 237240
- name: test
num_bytes: 311646
num_examples: 1777
- name: validation
num_bytes: 197814
num_examples: 1003
download_size: 329391132
dataset_size: 43353393
- config_name: iwslt2017-en-ro
features:
- name: translation
dtype:
translation:
languages:
- en
- ro
splits:
- name: train
num_bytes: 44129950
num_examples: 220538
- name: test
num_bytes: 316790
num_examples: 1678
- name: validation
num_bytes: 205028
num_examples: 914
download_size: 329391132
dataset_size: 44651768
- config_name: iwslt2017-it-en
features:
- name: translation
dtype:
translation:
languages:
- it
- en
splits:
- name: train
num_bytes: 46647925
num_examples: 231619
- name: test
num_bytes: 305246
num_examples: 1566
- name: validation
num_bytes: 200023
num_examples: 929
download_size: 329391132
dataset_size: 47153194
- config_name: iwslt2017-it-nl
features:
- name: translation
dtype:
translation:
languages:
- it
- nl
splits:
- name: train
num_bytes: 43033168
num_examples: 233415
- name: test
num_bytes: 309725
num_examples: 1669
- name: validation
num_bytes: 197774
num_examples: 1001
download_size: 329391132
dataset_size: 43540667
- config_name: iwslt2017-it-ro
features:
- name: translation
dtype:
translation:
languages:
- it
- ro
splits:
- name: train
num_bytes: 44485169
num_examples: 217551
- name: test
num_bytes: 314974
num_examples: 1643
- name: validation
num_bytes: 204989
num_examples: 914
download_size: 329391132
dataset_size: 45005132
- config_name: iwslt2017-nl-en
features:
- name: translation
dtype:
translation:
languages:
- nl
- en
splits:
- name: train
num_bytes: 42843933
num_examples: 237240
- name: test
num_bytes: 311646
num_examples: 1777
- name: validation
num_bytes: 197814
num_examples: 1003
download_size: 329391132
dataset_size: 43353393
- config_name: iwslt2017-nl-it
features:
- name: translation
dtype:
translation:
languages:
- nl
- it
splits:
- name: train
num_bytes: 43033168
num_examples: 233415
- name: test
num_bytes: 309725
num_examples: 1669
- name: validation
num_bytes: 197774
num_examples: 1001
download_size: 329391132
dataset_size: 43540667
- config_name: iwslt2017-nl-ro
features:
- name: translation
dtype:
translation:
languages:
- nl
- ro
splits:
- name: train
num_bytes: 41338738
num_examples: 206920
- name: test
num_bytes: 320952
num_examples: 1680
- name: validation
num_bytes: 202380
num_examples: 913
download_size: 329391132
dataset_size: 41862070
- config_name: iwslt2017-ro-en
features:
- name: translation
dtype:
translation:
languages:
- ro
- en
splits:
- name: train
num_bytes: 44129950
num_examples: 220538
- name: test
num_bytes: 316790
num_examples: 1678
- name: validation
num_bytes: 205028
num_examples: 914
download_size: 329391132
dataset_size: 44651768
- config_name: iwslt2017-ro-it
features:
- name: translation
dtype:
translation:
languages:
- ro
- it
splits:
- name: train
num_bytes: 44485169
num_examples: 217551
- name: test
num_bytes: 314974
num_examples: 1643
- name: validation
num_bytes: 204989
num_examples: 914
download_size: 329391132
dataset_size: 45005132
- config_name: iwslt2017-ro-nl
features:
- name: translation
dtype:
translation:
languages:
- ro
- nl
splits:
- name: train
num_bytes: 41338738
num_examples: 206920
- name: test
num_bytes: 320952
num_examples: 1680
- name: validation
num_bytes: 202380
num_examples: 913
download_size: 329391132
dataset_size: 41862070
- config_name: iwslt2017-ar-en
features:
- name: translation
dtype:
translation:
languages:
- ar
- en
splits:
- name: train
num_bytes: 56481059
num_examples: 231713
- name: test
num_bytes: 2014296
num_examples: 8583
- name: validation
num_bytes: 241206
num_examples: 888
download_size: 27748780
dataset_size: 58736561
- config_name: iwslt2017-de-en
features:
- name: translation
dtype:
translation:
languages:
- de
- en
splits:
- name: train
num_bytes: 42608380
num_examples: 206112
- name: test
num_bytes: 1608474
num_examples: 8079
- name: validation
num_bytes: 210975
num_examples: 888
download_size: 16758320
dataset_size: 44427829
- config_name: iwslt2017-en-ar
features:
- name: translation
dtype:
translation:
languages:
- en
- ar
splits:
- name: train
num_bytes: 56481059
num_examples: 231713
- name: test
num_bytes: 2014296
num_examples: 8583
- name: validation
num_bytes: 241206
num_examples: 888
download_size: 29333173
dataset_size: 58736561
- config_name: iwslt2017-en-de
features:
- name: translation
dtype:
translation:
languages:
- en
- de
splits:
- name: train
num_bytes: 42608380
num_examples: 206112
- name: test
num_bytes: 1608474
num_examples: 8079
- name: validation
num_bytes: 210975
num_examples: 888
download_size: 16758334
dataset_size: 44427829
- config_name: iwslt2017-en-fr
features:
- name: translation
dtype:
translation:
languages:
- en
- fr
splits:
- name: train
num_bytes: 49273286
num_examples: 232825
- name: test
num_bytes: 1767465
num_examples: 8597
- name: validation
num_bytes: 207579
num_examples: 890
download_size: 27699724
dataset_size: 51248330
- config_name: iwslt2017-en-ja
features:
- name: translation
dtype:
translation:
languages:
- en
- ja
splits:
- name: train
num_bytes: 48204987
num_examples: 223108
- name: test
num_bytes: 1809007
num_examples: 8469
- name: validation
num_bytes: 208124
num_examples: 871
download_size: 26983602
dataset_size: 50222118
- config_name: iwslt2017-en-ko
features:
- name: translation
dtype:
translation:
languages:
- en
- ko
splits:
- name: train
num_bytes: 51678043
num_examples: 230240
- name: test
num_bytes: 1869793
num_examples: 8514
- name: validation
num_bytes: 219295
num_examples: 879
download_size: 19364776
dataset_size: 53767131
- config_name: iwslt2017-en-zh
features:
- name: translation
dtype:
translation:
languages:
- en
- zh
splits:
- name: train
num_bytes: 44271004
num_examples: 231266
- name: test
num_bytes: 1605527
num_examples: 8549
- name: validation
num_bytes: 202537
num_examples: 879
download_size: 27597071
dataset_size: 46079068
- config_name: iwslt2017-fr-en
features:
- name: translation
dtype:
translation:
languages:
- fr
- en
splits:
- name: train
num_bytes: 49273286
num_examples: 232825
- name: test
num_bytes: 1767465
num_examples: 8597
- name: validation
num_bytes: 207579
num_examples: 890
download_size: 26880731
dataset_size: 51248330
- config_name: iwslt2017-ja-en
features:
- name: translation
dtype:
translation:
languages:
- ja
- en
splits:
- name: train
num_bytes: 48204987
num_examples: 223108
- name: test
num_bytes: 1809007
num_examples: 8469
- name: validation
num_bytes: 208124
num_examples: 871
download_size: 26190859
dataset_size: 50222118
- config_name: iwslt2017-ko-en
features:
- name: translation
dtype:
translation:
languages:
- ko
- en
splits:
- name: train
num_bytes: 51678043
num_examples: 230240
- name: test
num_bytes: 1869793
num_examples: 8514
- name: validation
num_bytes: 219295
num_examples: 879
download_size: 19364733
dataset_size: 53767131
- config_name: iwslt2017-zh-en
features:
- name: translation
dtype:
translation:
languages:
- zh
- en
splits:
- name: train
num_bytes: 44271004
num_examples: 231266
- name: test
num_bytes: 1605527
num_examples: 8549
- name: validation
num_bytes: 202537
num_examples: 879
download_size: 26849290
dataset_size: 46079068
---
# Dataset Card for IWSLT 2017
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://sites.google.com/site/iwsltevaluation2017/TED-tasks](https://sites.google.com/site/iwsltevaluation2017/TED-tasks)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [Overview of the IWSLT 2017 Evaluation Campaign](https://aclanthology.org/2017.iwslt-1.1/)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 4.24 GB
- **Size of the generated dataset:** 1.14 GB
- **Total amount of disk used:** 5.38 GB
*This repository contains a modified version of the loading script used in the official [iwslt2017](https://huggingface.co/datasets/iwslt2017) repository, updated to include document and segment information for all available sentence pairs, enabling their usage for document-level and context-aware MT applications. Refer to the original repository for additional information.*
| [
-0.5616062879562378,
-0.27712640166282654,
0.12789469957351685,
0.3320384621620178,
-0.346610426902771,
0.37973153591156006,
-0.06502201408147812,
-0.4272841215133667,
0.2874563932418823,
0.5325952768325806,
-1.1179264783859253,
-0.8380458950996399,
-0.7844113707542419,
-0.0051356931217014... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
disham993/Synthetic_Furniture_Dataset | disham993 | 2023-06-28T13:08:17Z | 124 | 2 | null | [
"region:us"
] | 2023-06-28T13:08:17Z | 2023-06-28T11:38:12.000Z | 2023-06-28T11:38:12 | ---
dataset_info:
features:
- name: name
dtype: string
- name: description
dtype: string
- name: ad
dtype: string
splits:
- name: train
num_bytes: 190892
num_examples: 1003
download_size: 0
dataset_size: 190892
---
# Dataset Card for "Synthetic_Furniture_Dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5387447476387024,
-0.531972348690033,
0.19391153752803802,
0.25755834579467773,
-0.14427980780601501,
0.03427042067050934,
0.2064162641763687,
-0.21752551198005676,
0.770683228969574,
0.47745972871780396,
-1.0166652202606201,
-0.74642413854599,
-0.21633383631706238,
-0.19789093732833862... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Atharva07/hc3_finance | Atharva07 | 2023-10-30T14:15:03Z | 124 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-10-30T14:15:03Z | 2023-10-30T13:52:45.000Z | 2023-10-30T13:52:45 | ---
license: apache-2.0
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: human_answers
dtype: string
- name: chatgpt_answers
sequence: string
- name: source
dtype: string
- name: embeddings
sequence: float32
- name: label
dtype: int64
splits:
- name: train
num_bytes: 12514923
num_examples: 3104
- name: validation
num_bytes: 1655672
num_examples: 414
- name: test
num_bytes: 1696431
num_examples: 415
download_size: 13908983
dataset_size: 15867026
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gianma/eurlexsum_ita_cleaned_8192_232 | gianma | 2023-11-05T11:45:05Z | 124 | 0 | null | [
"region:us"
] | 2023-11-05T11:45:05Z | 2023-11-02T15:22:31.000Z | 2023-11-02T15:22:31 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: is_camera
dtype: bool
- name: reference
dtype: string
- name: summary
dtype: string
- name: tokenized_len_total
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 4119487
num_examples: 228
- name: validation
num_bytes: 231666
num_examples: 13
- name: test
num_bytes: 253451
num_examples: 13
download_size: 0
dataset_size: 4604604
---
# Dataset Card for "eurlexsum_ita_cleaned_8192_232"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3834327459335327,
-0.03914444148540497,
0.10824007540941238,
0.09117204695940018,
-0.2989130914211273,
0.07743332535028458,
0.39274078607559204,
0.0343114472925663,
1.0148998498916626,
0.7859899997711182,
-0.6043107509613037,
-0.6520805358886719,
-0.2806175947189331,
-0.0476584881544113... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Mohammed-Altaf/medical-instruction-120k | Mohammed-Altaf | 2023-11-16T15:48:56Z | 124 | 3 | null | [
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"medical",
"region:us"
] | 2023-11-16T15:48:56Z | 2023-11-16T11:48:28.000Z | 2023-11-16T11:48:28 | ---
license: mit
language:
- en
tags:
- medical
pretty_name: python
size_categories:
- 100K<n<1M
---
# What is the Dataset About?🤷🏼♂️
---
The dataset is useful for training a generative language model for medical application and instruction purposes. It consists of various thoughts proposed by people [**mentioned as the Human**] and their responses, which include medical terminology, including but not limited to names of drugs, prescriptions, yogic exercise suggestions, breathing exercise suggestions, and a few natural home-made prescriptions.
# How the Dataset was made?😅
---
I have used all the available open-source datasets and combined them into a single data source for training, which is completely open-sourced and somewhat reliable.
* There is a smaller version of this dataset here 👉🏼 [Link](https://huggingface.co/datasets/Mohammed-Altaf/medical-instruction-100k)
## Example Training Scripts:
* Qlora Fine Tuning - | [
-0.09224707633256912,
-0.5886741280555725,
0.029624709859490395,
-0.19203726947307587,
-0.2037874013185501,
-0.31000664830207825,
-0.2424692064523697,
-0.06225166469812393,
0.18974149227142334,
0.6678253412246704,
-0.7820249795913696,
-0.9531708359718323,
-0.3411448001861572,
-0.0363512076... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
qwedsacf/grade-school-math-instructions | qwedsacf | 2023-02-11T01:59:26Z | 123 | 30 | null | [
"region:us"
] | 2023-02-11T01:59:26Z | 2023-02-11T01:32:53.000Z | 2023-02-11T01:32:53 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
splits:
- name: train
num_bytes: 4804916
num_examples: 8792
download_size: 2554896
dataset_size: 4804916
---
# Dataset Card for grade-school-math-instructions
OpenAI's [grade-school-math](https://github.com/openai/grade-school-math) dataset (GSM8K) converted into instruction format.
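The conversion idea can be sketched as follows: each GSM8K question/answer pair is mapped onto this dataset's `INSTRUCTION`/`RESPONSE`/`SOURCE` fields. The exact template used to build the dataset is not documented here, so this helper is an illustrative assumption:

```python
def to_instruction(question: str, answer: str) -> dict:
    """Map a GSM8K-style QA pair onto this dataset's schema.

    Hypothetical sketch: the actual conversion used to build the
    dataset may have applied a different prompt template.
    """
    return {
        "INSTRUCTION": question.strip(),
        "RESPONSE": answer.strip(),
        "SOURCE": "grade-school-math",
    }

example = to_instruction(
    "Natalia sold clips to 48 of her friends in April, and then she "
    "sold half as many clips in May. How many clips did she sell "
    "altogether in April and May?",
    "In May she sold 48 / 2 = 24 clips, so altogether she sold "
    "48 + 24 = 72 clips. The answer is 72.",
)
```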
## Citation Information
```bibtex
@article{cobbe2021gsm8k,
title={Training Verifiers to Solve Math Word Problems},
author={Cobbe, Karl and Kosaraju, Vineet and Bavarian, Mohammad and Chen, Mark and Jun, Heewoo and Kaiser, Lukasz and Plappert, Matthias and Tworek, Jerry and Hilton, Jacob and Nakano, Reiichiro and Hesse, Christopher and Schulman, John},
journal={arXiv preprint arXiv:2110.14168},
year={2021}
}
``` | [
-0.09087096899747849,
-0.7206839323043823,
0.4463505148887634,
0.23793058097362518,
-0.16765853762626648,
-0.43183448910713196,
-0.26325806975364685,
0.18879002332687378,
0.09273114800453186,
0.21881742775440216,
-0.8063789010047913,
-0.77323979139328,
-0.44010987877845764,
-0.035549711436... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
FreedomIntelligence/huatuo_knowledge_graph_qa | FreedomIntelligence | 2023-07-07T08:46:58Z | 123 | 17 | null | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:zh",
"license:apache-2.0",
"medical",
"arxiv:2305.01526",
"region:us"
] | 2023-07-07T08:46:58Z | 2023-05-06T06:35:38.000Z | 2023-05-06T06:35:38 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- zh
tags:
- medical
size_categories:
- 100K<n<1M
---
# Dataset Card for Huatuo_knowledge_graph_qa
## Dataset Description
- **Homepage: https://www.huatuogpt.cn/**
- **Repository: https://github.com/FreedomIntelligence/HuatuoGPT**
- **Paper: https://arxiv.org/abs/2305.01526**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
We built this QA dataset based on a medical knowledge graph, with a total of 798,444 examples, in which the questions are constructed from templates and the answers are the contents of the corresponding entries in the knowledge graph.
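The template-based construction described above can be sketched roughly as follows. The templates, relation names, and field layout here are hypothetical illustrations (the actual templates used to build the dataset are not published in this card); only the overall mechanism — fill an entity into a question template, take the knowledge-graph entry as the answer — is taken from the summary:

```python
# Hypothetical question templates keyed by knowledge-graph relation.
TEMPLATES = {
    "symptom": "What are the symptoms of {entity}?",
    "treatment": "How is {entity} treated?",
}

def build_qa(entity: str, relation: str, entry_text: str) -> dict:
    """Construct one QA pair from a knowledge-graph entry."""
    question = TEMPLATES[relation].format(entity=entity)
    return {"question": question, "answer": entry_text}

qa = build_qa("influenza", "symptom",
              "Fever, cough, sore throat, and fatigue.")
```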
## Dataset Creation
### Source Data
https://cpubmed.openi.org.cn/graph/wiki
https://github.com/zhihao-chen/QASystemOnMedicalGraph
https://github.com/baiyang2464/chatbot-base-on-Knowledge-Graph
## Citation
```
@misc{li2023huatuo26m,
title={Huatuo-26M, a Large-scale Chinese Medical QA Dataset},
author={Jianquan Li and Xidong Wang and Xiangbo Wu and Zhiyi Zhang and Xiaolong Xu and Jie Fu and Prayag Tiwari and Xiang Wan and Benyou Wang},
year={2023},
eprint={2305.01526},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| [
-0.2502368986606598,
-0.6133176684379578,
0.4272741973400116,
-0.048850227147340775,
-0.40978288650512695,
-0.11082099378108978,
0.06470910459756851,
-0.23889070749282837,
0.3431724011898041,
0.5033988952636719,
-0.2540864050388336,
-0.9548426866531372,
-0.3681131601333618,
-0.168980196118... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
wanng/midjourney-v5-202304-clean | wanng | 2023-05-28T05:56:11Z | 123 | 37 | null | [
"task_categories:text-to-image",
"task_categories:image-to-text",
"language:en",
"license:apache-2.0",
"midjourney",
"region:us"
] | 2023-05-28T05:56:11Z | 2023-05-26T06:58:05.000Z | 2023-05-26T06:58:05 | ---
license: apache-2.0
task_categories:
- text-to-image
- image-to-text
language:
- en
tags:
- midjourney
---
# midjourney-v5-202304-clean
## 简介 Brief Introduction
非官方的,爬取自midjourney v5的2023年4月的数据,一共1701420条。
Unofficial, crawled from midjourney v5 for April 2023, 1,701,420 pairs in total.
## 数据集信息 Dataset Information
原始项目地址:https://huggingface.co/datasets/tarungupta83/MidJourney_v5_Prompt_dataset
我做了一些清洗,清理出了两个文件:
- ori_prompts_df.parquet (1,255,812对,midjourney的四格图)

- upscaled_prompts_df.parquet (445,608对,使用了高清指令的图,这意味着这个图更受欢迎。)

Original project address: https://huggingface.co/datasets/tarungupta83/MidJourney_v5_Prompt_dataset
I did some cleaning and cleaned out two files:
- ori_prompts_df.parquet (1,255,812 pairs, midjourney's four-frame diagrams)
- upscaled_prompts_df.parquet (445,608 pairs, graphs that use the Upscale command, which means this one is more popular.)
| [
-0.6139171719551086,
-0.6779617667198181,
0.4318042993545532,
0.24868394434452057,
-0.4965573847293854,
-0.3421090245246887,
0.19979849457740784,
-0.27228450775146484,
0.5194266438484192,
0.5432901978492737,
-0.9050121307373047,
-0.6482633352279663,
-0.5771759152412415,
0.12503013014793396... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
aaparajit02/punjabi-asr | aaparajit02 | 2023-07-23T17:11:39Z | 123 | 0 | null | [
"task_categories:automatic-speech-recognition",
"size_categories:10K<n<100K",
"language:pa",
"punjabi",
"asr",
"transcription",
"translation",
"arxiv:2208.12666",
"region:us"
] | 2023-07-23T17:11:39Z | 2023-07-23T16:16:07.000Z | 2023-07-23T16:16:07 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcript
dtype: string
- name: english
dtype: string
splits:
- name: train
num_bytes: 10917088956.322
num_examples: 39238
download_size: 10866820110
dataset_size: 10917088956.322
task_categories:
- automatic-speech-recognition
language:
- pa
tags:
- punjabi
- asr
- transcription
- translation
pretty_name: Punjabi ASR
size_categories:
- 10K<n<100K
---
# Dataset for Punjabi ASR
Shrutilipi is a labelled ASR corpus obtained by mining parallel audio and text pairs at the document scale from All India Radio news bulletins for 12 Indian languages: Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Odia, Punjabi, Sanskrit, Tamil, Telugu, Urdu. The corpus has over 6400 hours of data across all languages.
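Given the ~10.9 GB train split, streaming is the practical way to sample the data. A sketch assuming the repo id and the `audio`/`transcript` columns listed in the metadata above; the duration helper is my own addition:

```python
# Streaming loads one sample at a time instead of downloading
# the full archive up front (the network-bound calls are
# commented out here):
# from datasets import load_dataset
# ds = load_dataset("aaparajit02/punjabi-asr", split="train", streaming=True)
# sample = next(iter(ds))

def clip_seconds(audio_array, sampling_rate: int) -> float:
    """Duration in seconds of a decoded audio sample."""
    return len(audio_array) / sampling_rate

# print(sample["transcript"],
#       clip_seconds(sample["audio"]["array"],
#                    sample["audio"]["sampling_rate"]))
```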
```
@misc{https://doi.org/10.48550/arxiv.2208.12666,
doi = {10.48550/ARXIV.2208.12666},
url = {https://arxiv.org/abs/2208.12666},
author = {Bhogale, Kaushal Santosh and Raman, Abhigyan and Javed, Tahir and Doddapaneni, Sumanth and Kunchukuttan, Anoop and Kumar, Pratyush and Khapra, Mitesh M.},
title = {Effectiveness of Mining Audio and Text Pairs from Public Data for Improving ASR Systems for Low-Resource Languages},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
``` | [
0.028711704537272453,
-0.2556597888469696,
-0.03999420627951622,
0.46325206756591797,
-0.3486008942127228,
0.07037588953971863,
-0.6663274168968201,
-0.2314927726984024,
0.3687243163585663,
0.1353851705789566,
-0.2920933961868286,
-0.5047986507415771,
-0.9087499976158142,
0.300869196653366... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
distil-whisper/tedlium-prompted | distil-whisper | 2023-09-18T13:21:11Z | 123 | 0 | null | [
"region:us"
] | 2023-09-18T13:21:11Z | 2023-09-18T12:41:46.000Z | 2023-09-18T12:41:46 | ---
dataset_info:
config_name: release3
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: string
- name: gender
dtype:
class_label:
names:
'0': unknown
'1': female
'2': male
- name: file
dtype: string
- name: id
dtype: string
- name: whisper_transcript_unprompted
dtype: string
- name: whisper_transcript
dtype: string
splits:
- name: train
num_bytes: 52484152554.125
num_examples: 268263
- name: validation
num_bytes: 184679438.0
num_examples: 507
- name: test
num_bytes: 302513272.625
num_examples: 1155
download_size: 52650349441
dataset_size: 52971345264.75
configs:
- config_name: release3
data_files:
- split: train
path: release3/train-*
- split: validation
path: release3/validation-*
- split: test
path: release3/test-*
---
# Dataset Card for "tedlium-prompted"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4103659689426422,
-0.5693304538726807,
0.33432653546333313,
0.14555054903030396,
-0.1952545940876007,
-0.0036830510944128036,
0.07625087350606918,
-0.033269017934799194,
0.9061578512191772,
0.5038015842437744,
-1.0929259061813354,
-0.8100020885467529,
-0.35032761096954346,
-0.0696232691... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
distil-whisper/gigaspeech-l-timestamped | distil-whisper | 2023-09-25T10:28:51Z | 123 | 0 | null | [
"task_categories:automatic-speech-recognition",
"language:en",
"license:other",
"region:us"
] | 2023-09-25T10:28:51Z | 2023-09-22T09:05:06.000Z | 2023-09-22T09:05:06 | ---
license: other
task_categories:
- automatic-speech-recognition
language:
- en
extra_gated_prompt: |-
SpeechColab does not own the copyright of the audio files. For researchers and educators who wish to use the audio files for non-commercial research and/or educational purposes, we can provide access through the Hub under certain conditions and terms.
Terms of Access:
The "Researcher" has requested permission to use the GigaSpeech database (the "Database") at Tsinghua University. In exchange for such permission, Researcher hereby agrees to the following terms and conditions:
1. Researcher shall use the Database only for non-commercial research and educational purposes.
2. The SpeechColab team and Tsinghua University make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose.
3. Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify the SpeechColab team and Tsinghua University, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted audio files that he or she may create from the Database.
4. Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions.
5. The SpeechColab team and Tsinghua University reserve the right to terminate Researcher's access to the Database at any time.
6. If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer.
Please also fill out the Google Form https://forms.gle/UuGQAPyscGRrUMLq6 to request access to the GigaSpeech dataset.
extra_gated_fields:
Name: text
Email: text
Organization: text
Address: text
I hereby confirm that I have requested access via the Google Form provided above: checkbox
I accept the terms of access: checkbox
---
# Distil Whisper: GigaSpeech With Timestamps
This is a variant of the [GigaSpeech](https://huggingface.co/datasets/speechcolab/gigaspeech) dataset, augmented to return the pseudo-labelled Whisper
transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/speechcolab/gigaspeech).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/gigaspeech-l", "l")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/gigaspeech-l", "l", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under custom terms. To view the custom license for this dataset, refer to the original [dataset card](https://huggingface.co/datasets/speechcolab/gigaspeech).
| [
-0.2145792543888092,
-0.6991184949874878,
0.17692777514457703,
0.5081067681312561,
-0.2739357054233551,
0.11410746723413467,
-0.05189043655991554,
-0.2912706732749939,
0.5870535969734192,
0.32607975602149963,
-0.8534108996391296,
-0.30532094836235046,
-0.6623873114585876,
-0.02971706911921... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
distil-whisper/peoples_speech-clean-timestamped | distil-whisper | 2023-09-25T10:30:12Z | 123 | 0 | null | [
"task_categories:automatic-speech-recognition",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2023-09-25T10:30:12Z | 2023-09-22T09:05:09.000Z | 2023-09-22T09:05:09 | ---
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
language:
- en
pretty_name: People's Speech Clean
---
# Distil Whisper: People's Speech Clean With Timestamps
This is a variant of the [People's Speech Clean](https://huggingface.co/datasets/MLCommons/peoples_speech) dataset, augmented to return the pseudo-labelled Whisper
transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/MLCommons/peoples_speech).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/peoples_speech-clean", "clean")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/peoples_speech-clean", "clean", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under cc-by-4.0.
| [
-0.1517052948474884,
-0.5760918259620667,
0.11479295045137405,
0.38647714257240295,
-0.3279121518135071,
0.15525327622890472,
-0.1843409240245819,
-0.30905625224113464,
0.39851266145706177,
0.501780092716217,
-0.7392765879631042,
-0.48730242252349854,
-0.5272939801216125,
0.057869274169206... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nathanReitinger/mlcb | nathanReitinger | 2023-11-04T02:30:20Z | 123 | 0 | null | [
"region:us"
] | 2023-11-04T02:30:20Z | 2023-10-25T01:54:40.000Z | 2023-10-25T01:54:40 | ---
dataset_info:
features:
- name: label
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 8132250961
num_examples: 76369
- name: test
num_bytes: 897865830
num_examples: 8486
download_size: 2715307703
dataset_size: 9030116791
---
# Dataset Card for "mlcb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7132006287574768,
-0.386863648891449,
0.17672747373580933,
0.4160602390766144,
-0.19631119072437286,
-0.03338804468512535,
0.288119375705719,
-0.19106677174568176,
0.8404473662376404,
0.6223280429840088,
-0.925312340259552,
-0.9053005576133728,
-0.5020596385002136,
-0.298002690076828,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
didsr/msynth | didsr | 2023-11-16T21:48:34Z | 123 | 3 | null | [
"task_categories:image-classification",
"task_categories:image-segmentation",
"size_categories:10K<n<100K",
"license:cc0-1.0",
"medical",
"arxiv:2310.18494",
"region:us"
] | 2023-11-16T21:48:34Z | 2023-10-26T21:32:23.000Z | 2023-10-26T21:32:23 | ---
license: cc0-1.0
task_categories:
- image-classification
- image-segmentation
tags:
- medical
pretty_name: M-SYNTH
size_categories:
- 10K<n<100K
---
# M-SYNTH
<!-- Provide a quick summary of the dataset. -->
M-SYNTH is a synthetic digital mammography (DM) dataset with four breast fibroglandular density distributions imaged using Monte Carlo x-ray simulations with the publicly available [Virtual Imaging Clinical Trial for Regulatory Evaluation (VICTRE)](https://github.com/DIDSR/VICTRE) toolkit.
## Dataset Details
The dataset has the following characteristics:
* Breast density: dense, heterogeneously dense, scattered, fatty
* Mass radius (mm): 5.00, 7.00, 9.00
* Mass density: 1.0, 1.06, 1.1 (ratio of radiodensity of the mass to that of fibroglandular tissue)
* Relative dose: 20%, 40%, 60%, 80%, 100% of the clinically recommended dose for each density
<p align="center">
<img src='https://raw.githubusercontent.com/DIDSR/msynth-release/main/images/examples.png' width='700'>
</p>
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [Elena Sizikova](https://esizikova.github.io/), [Niloufar Saharkhiz](https://www.linkedin.com/in/niloufar-saharkhiz/), [Diksha Sharma](https://www.linkedin.com/in/diksha-sharma-6059977/), [Miguel Lago](https://www.linkedin.com/in/milaan/), [Berkman Sahiner](https://www.linkedin.com/in/berkman-sahiner-6aa9a919/), [Jana Gut Delfino](https://www.linkedin.com/in/janadelfino/), [Aldo Badano](https://www.linkedin.com/in/aldobadano/)
- **License:** Creative Commons 1.0 Universal License (CC0)
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Code:** [https://github.com/DIDSR/msynth-release](https://github.com/DIDSR/msynth-release)
- **Paper:** [https://arxiv.org/pdf/2310.18494.pdf](https://arxiv.org/pdf/2310.18494.pdf)
- **Demo:** [https://github.com/DIDSR/msynth-release/tree/master/examples](https://github.com/DIDSR/msynth-release/tree/master/examples)
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
M-SYNTH is intended to facilitate testing of AI with pre-computed synthetic mammography data.
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
M-SYNTH can be used to evaluate the effect of mass size and density, breast density, and dose on AI performance in lesion detection.
M-SYNTH can be used to either train or test pre-trained AI models.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
M-SYNTH cannot be used in lieu of real patient examples to make performance determinations.
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
M-SYNTH is organized into a directory structure that indicates the parameters. The folder
```
device_data_VICTREPhantoms_spic_[LESION_DENSITY]/[DOSE]/[BREAST_DENSITY]/2/[LESION_SIZE]/SIM/P2_[LESION_SIZE]_[BREAST_DENSITY].8337609.[PHANTOM_FILE_ID]/[PHANTOM_FILEID]/
```
contains image files generated with the specified parameters. Note that only examples with an odd PHANTOM_FILEID contain lesions; the others do not.
```
$ tree data/device_data_VICTREPhantoms_spic_1.0/1.02e10/hetero/2/5.0/SIM/P2_5.0_hetero.8337609.1/1/
data/device_data_VICTREPhantoms_spic_1.0/1.02e10/hetero/2/5.0/SIM/P2_5.0_hetero.8337609.1/1/
├── DICOM_dm
│ └── 000.dcm
├── projection_DM1.loc
├── projection_DM1.mhd
└── projection_DM1.raw
```
Each folder contains mammogram data that can be read from .raw format (.mhd contains supporting data), or DICOM (.dcm) format.
Coordinates of lesions can be found in .loc files. Segmentations are stored in .raw format and can be found in data/segmentation_masks/* .
See [Github](https://github.com/DIDSR/msynth-release/tree/main/code) for examples of how to access the files, and [examples](https://github.com/DIDSR/msynth-release/tree/main/examples) for code to load each type of file.
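The .mhd files are plain-text MetaImage headers (`key = value` pairs such as `DimSize` and `ElementType`), so the companion .raw projection can be read with NumPy once the header is parsed. A minimal sketch, assuming `MET_FLOAT` (float32) pixel data; the official loading examples live in the linked repository:

```python
import numpy as np

def parse_mhd(text: str) -> dict:
    """Parse a MetaImage (.mhd) header into a key -> value dict."""
    header = {}
    for line in text.splitlines():
        if "=" in line:
            key, value = (part.strip() for part in line.split("=", 1))
            header[key] = value
    return header

def load_raw(raw_path: str, header: dict) -> np.ndarray:
    """Read the companion .raw projection using the header's size.

    Assumes MET_FLOAT (float32) data; check the header's
    ElementType before relying on this dtype.
    """
    shape = tuple(int(n) for n in header["DimSize"].split())[::-1]
    return np.fromfile(raw_path, dtype=np.float32).reshape(shape)
```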
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Simulation-based testing is constrained to the parameter variability represented in the object model and the acquisition system.
There is a risk of misjudging model performance if the simulated examples do not capture the variability in real patients. Please
see the paper for a full discussion of biases, risks, and limitations.
## How to use it
The msynth dataset is very large, so for most use cases it is recommended to make use of the streaming API of `datasets`.
The msynth dataset has three configurations: 1) device_data, 2) segmentation_mask, and 3) metadata
You can load and iterate through the dataset using the configurations with the following lines of code:
```python
from datasets import load_dataset
ds = load_dataset("didsr/msynth", 'device_data') # Device data for all breast densities, mass radii, mass densities, and relative doses; use the 'segmentation_mask' or 'metadata' configuration to load the segmentation masks or bounds information instead
print(ds["device_data"])
# A sample data instance
{'Raw': '~\\.cache\\huggingface\\datasets\\downloads\\extracted\\59384cf05fc44e8c0cb23bb19e1fcd8f0c39720b282109d204a85561fe66bdb1\\SIM\\P2_5.0_fatty.8336179.1\\1\\projection_DM1.raw',
'mhd': '~/.cache/huggingface/datasets/downloads/extracted/59384cf05fc44e8c0cb23bb19e1fcd8f0c39720b282109d204a85561fe66bdb1/SIM/P2_5.0_fatty.8336179.1/1\\projection_DM1.mhd',
'loc': '~/.cache/huggingface/datasets/downloads/extracted/59384cf05fc44e8c0cb23bb19e1fcd8f0c39720b282109d204a85561fe66bdb1/SIM/P2_5.0_fatty.8336179.1/1\\projection_DM1.loc',
'dcm': '~/.cache/huggingface/datasets/downloads/extracted/59384cf05fc44e8c0cb23bb19e1fcd8f0c39720b282109d204a85561fe66bdb1/SIM/P2_5.0_fatty.8336179.1/1\\DICOM_dm\\000.dcm',
'density': 'fatty',
'mass_radius': 5.0}
```
The msynth dataset can also be loaded with custom breast density, mass radius, mass density, and relative dose values:
```python
from datasets import load_dataset
# Dataset properties. Change to 'all' to include all values of breast density, mass radius, mass density, and relative dose
config_kwargs = {
"lesion_density": ["1.0"],
"dose": ["20%"],
"density": ["fatty"],
"size": ["5.0"]
}
# Loading device data
ds_data = load_dataset("didsr/msynth", 'device_data', **config_kwargs)
# Loading segmentation-mask
ds_seg = load_dataset("didsr/msynth", 'segmentation_mask', **config_kwargs)
```
The metadata can also be loaded using the `datasets` API. An example of using the metadata is given in the **Demo:** [https://github.com/DIDSR/msynth-release/tree/master/examples](https://github.com/DIDSR/msynth-release/tree/master/examples)
```python
from datasets import load_dataset
# Loading metadata
ds_meta = load_dataset("didsr/msynth", 'metadata')
# A sample data instance
ds_meta['metadata'][0]
# Output
{'fatty': '~\\.cache\\huggingface\\datasets\\downloads\\extracted\\3ea85fc6b3fcc253ac8550b5d1b21db406ca9a59ea125ff8fc63d9b754c88348\\bounds\\bounds_fatty.npy',
'dense': '~\\.cache\\huggingface\\datasets\\downloads\\extracted\\3ea85fc6b3fcc253ac8550b5d1b21db406ca9a59ea125ff8fc63d9b754c88348\\bounds\\bounds_dense.npy',
'hetero': '~\\.cache\\huggingface\\datasets\\downloads\\extracted\\3ea85fc6b3fcc253ac8550b5d1b21db406ca9a59ea125ff8fc63d9b754c88348\\bounds\\bounds_hetero.npy',
'scattered': '~\\.cache\\huggingface\\datasets\\downloads\\extracted\\3ea85fc6b3fcc253ac8550b5d1b21db406ca9a59ea125ff8fc63d9b754c88348\\bounds\\bounds_scattered.npy'}
```
## Citation
```
@article{sizikova2023knowledge,
title={Knowledge-based in silico models and dataset for the comparative evaluation of mammography AI for a range of breast characteristics, lesion conspicuities and doses},
author={Sizikova, Elena and Saharkhiz, Niloufar and Sharma, Diksha and Lago, Miguel and Sahiner, Berkman and Delfino, Jana G. and Badano, Aldo},
journal={Advances in Neural Information Processing Systems},
volume={},
pages={},
year={2023}
}
```
## Related Links
1. [Virtual Imaging Clinical Trial for Regulatory Evaluation (VICTRE)](https://www.fda.gov/medical-devices/science-and-research-medical-devices/victre-silico-breast-imaging-pipeline).
2. [FDA Catalog of Regulatory Science Tools to Help Assess New Medical Devices](https://www.fda.gov/medical-devices/science-and-research-medical-devices/catalog-regulatory-science-tools-help-assess-new-medical-devices).
3. A. Badano, C. G. Graff, A. Badal, D. Sharma, R. Zeng, F. W. Samuelson, S. Glick, K. J. Myers. [Evaluation of Digital Breast Tomosynthesis as Replacement of Full-Field Digital Mammography Using an In Silico Imaging Trial](http://dx.doi.org/10.1001/jamanetworkopen.2018.5474). JAMA Network Open 2018.
4. A. Badano, M. Lago, E. Sizikova, J. G. Delfino, S. Guan, M. A. Anastasio, B. Sahiner. [The stochastic digital human is now enrolling for in silico imaging trials—methods and tools for generating digital cohorts.](http://dx.doi.org/10.1088/2516-1091/ad04c0) Progress in Biomedical Engineering 2023.
5. E. Sizikova, N. Saharkhiz, D. Sharma, M. Lago, B. Sahiner, J. G. Delfino, A. Badano. [Knowledge-based in silico models and dataset for the comparative evaluation of mammography AI](https://github.com/DIDSR/msynth-release). NeurIPS 2023 Workshop on Synthetic Data Generation with Generative AI. | [
-0.3856155276298523,
-0.590556800365448,
0.5629978775978088,
-0.11633150279521942,
-0.41932252049446106,
-0.21571794152259827,
0.2743166983127594,
-0.17324413359165192,
0.5666936635971069,
0.4563908576965332,
-0.8506183624267578,
-0.7895660400390625,
-0.43429824709892273,
0.180201396346092... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arakesh/uavid-15-hq-mixedres | arakesh | 2022-04-12T17:19:47Z | 122 | 1 | null | [
"region:us"
] | 2022-04-12T17:19:47Z | 2022-04-12T17:04:13.000Z | 2022-04-12T17:04:13 | Data source: https://uavid.nl/
| images | semantic maps | instance ids |
| --- | --- | --- |
| available | available | n/a |
```
dataset-size: 6.1G
resolution: mixed (3840x2160, 4096x2160) - different drone cameras were used for different scenes.
license: ...
sample-size:
+ train: 200
+ test: 70
``` | [
-0.7013291120529175,
-0.47877153754234314,
0.1105867400765419,
-0.17426224052906036,
-0.6123225688934326,
-0.22527645528316498,
0.014948091469705105,
-0.5919974446296692,
0.3318914771080017,
0.3912420868873596,
-0.6099783778190613,
-0.978712797164917,
-0.6784992218017578,
0.051644969731569... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/detoxify-pile-chunk3-400000-450000 | tomekkorbak | 2022-10-03T18:51:21Z | 122 | 0 | null | [
"region:us"
] | 2022-10-03T18:51:21Z | 2022-10-03T18:51:13.000Z | 2022-10-03T18:51:13 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
biglam/europeana_newspapers | biglam | 2023-01-06T11:42:17Z | 122 | 2 | null | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"language:de",
"language:fr",
"language:el",
"language:et",
"language:fi",
"language:hr",
... | 2023-01-06T11:42:17Z | 2022-10-04T16:31:37.000Z | 2022-10-04T16:31:37 | ---
annotations_creators:
- no-annotation
language:
- de
- fr
- el
- et
- fi
- hr
- ji
- pl
- ru
- sr
- sv
- uk
language_creators:
- machine-generated
multilinguality:
- multilingual
pretty_name: 'Europeana Newspapers '
size_categories:
- 1M<n<10M
source_datasets: []
tags:
- newspapers
- lam
- OCR
task_categories:
- text-generation
task_ids:
- language-modeling
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vietgpt/opus100_envi | vietgpt | 2023-07-03T17:56:58Z | 122 | 0 | null | [
"task_categories:translation",
"size_categories:1M<n<10M",
"language:en",
"language:vi",
"LM",
"region:us"
] | 2023-07-03T17:56:58Z | 2023-02-22T09:11:25.000Z | 2023-02-22T09:11:25 | ---
dataset_info:
features:
- name: en
dtype: string
- name: vi
dtype: string
splits:
- name: test
num_bytes: 192744
num_examples: 2000
- name: train
num_bytes: 82614470
num_examples: 1000000
- name: validation
num_bytes: 194721
num_examples: 2000
download_size: 59201490
dataset_size: 83001935
task_categories:
- translation
language:
- en
- vi
tags:
- LM
size_categories:
- 1M<n<10M
---
# Opus100
- Source: https://huggingface.co/datasets/opus100
- Num examples:
- 1,000,000 (train)
- 2,000 (validation)
- 192,744 (test)
- Languages: English, Vietnamese
```python
from datasets import load_dataset
load_dataset("tdtunlp/opus100_envi")
```
- Format for Translation task
```python
import random

def preprocess(
sample,
instruction_key="### Instruction:",
input_key="Input:",
response_key="<|endofprompt|>",
end_key="<|endoftext|>",
en2vi=True,
):
if en2vi:
if random.random() < 0.5:
instruction = "Translate the following sentences from English into Vietnamese."
else:
instruction = "Dịch các câu sau từ tiếng Anh sang tiếng Việt."
input = sample['en'].strip()
response = sample['vi'].strip()
else:
if random.random() < 0.5:
instruction = "Translate the following sentences from Vietnamese into English."
else:
instruction = "Dịch các câu sau từ tiếng Việt sang tiếng Anh."
input = sample['vi'].strip()
response = sample['en'].strip()
return {'text': """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
{instruction_key}
{instruction}
{input_key}
{input}
{response_key}
{response}
{end_key}""".format(
instruction_key=instruction_key,
instruction=instruction,
input_key=input_key,
input=input,
response_key=response_key,
response=response,
end_key=end_key,
)}
"""
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
Dịch các câu sau từ tiếng Anh sang tiếng Việt.
Input:
Toast falls jelly-side down, children hit tables and people get hurt.
<|endofprompt|>
Bánh mì nướng rơi đông lại, trẻ con va vào bàn và con người bị thương.
<|endoftext|>
"""
``` | [
-0.043979257345199585,
-0.7543107867240906,
0.3150809407234192,
0.7180124521255493,
0.01806522160768509,
-0.549257755279541,
-0.4796547293663025,
0.002050066366791725,
0.05512804165482521,
0.5296398401260376,
-0.6730581521987915,
-0.4617209732532501,
-0.5186929106712341,
0.5571518540382385... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
fujiki/japanese_alpaca_data | fujiki | 2023-05-19T12:54:13Z | 122 | 7 | null | [
"language:ja",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2023-05-19T12:54:13Z | 2023-05-18T07:13:15.000Z | 2023-05-18T07:13:15 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 24733874
num_examples: 52002
download_size: 13849623
dataset_size: 24733874
license: cc-by-nc-sa-4.0
language:
- ja
pretty_name: japanese_alpaca
---
# Dataset Card for "japanese_alpaca_data"
- This dataset is based on `masa3141`'s great work on `japanese-alpaca-lora` [[github]](https://github.com/masa3141/japanese-alpaca-lora). Please also refer to this repo.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6534190773963928,
-0.4084831178188324,
0.2316078245639801,
0.38069555163383484,
-0.43640655279159546,
-0.16885893046855927,
0.280422180891037,
-0.5003511309623718,
1.1565210819244385,
0.8293159604072571,
-0.8378998041152954,
-0.8674308657646179,
-0.6356501579284668,
-0.12693479657173157... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
voidful/qrecc | voidful | 2023-05-20T16:36:18Z | 122 | 0 | null | [
"region:us"
] | 2023-05-20T16:36:18Z | 2023-05-20T16:35:30.000Z | 2023-05-20T16:35:30 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
argilla/dolly-curated-comparison-falcon-7b-instruct | argilla | 2023-07-13T11:28:57Z | 122 | 4 | null | [
"language:en",
"region:us"
] | 2023-07-13T11:28:57Z | 2023-05-30T19:21:21.000Z | 2023-05-30T19:21:21 | ---
language: en
dataset_info:
features:
- name: prompt
dtype: string
- name: response-1
dtype: string
- name: response-2
dtype: string
- name: category
dtype: string
- name: original_response
dtype: string
- name: external_id
dtype: int64
splits:
- name: train
num_bytes: 10328235
num_examples: 7401
download_size: 6598297
dataset_size: 10328235
---
# Dataset Card for "dolly-curated-comparison-falcon-7b-instruct"
This dataset contains two generated responses using the `falcon-7b-instruct` model and the original, curated, prompt + responses from the Dolly v2 curated dataset. For now only 50% of the original dataset is available but we plan to complete it.
This dataset can be used for training a reward model for RLHF using [Argilla Feedback](https://docs.argilla.io/en/latest/guides/llms/conceptual_guides/conceptual_guides.html)
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5164432525634766,
-0.6163637638092041,
-0.016586652025580406,
-0.007374655921012163,
-0.3449321389198303,
-0.14277702569961548,
0.46668100357055664,
-0.40853095054626465,
0.665899395942688,
0.9009774327278137,
-1.0009901523590088,
-0.4872198700904846,
-0.5620502233505249,
0.043826628476... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HANSEN-REPO/HANSEN | HANSEN-REPO | 2023-11-01T18:35:34Z | 122 | 1 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-01T18:35:34Z | 2023-06-23T20:11:04.000Z | 2023-06-23T20:11:04 | ---
license: apache-2.0
---
# HANSEN
Human and AI Spoken Text Benchmark for Authorship Analysis.
**We are updating HANSEN to the following format.**
The dataset has three portions:
(1) open-source data/existing datasets that we are free to redistribute (all AA and AV datasets except FTN and CEO);
(2) open-source data that we may not freely redistribute, which users have to download/scrape themselves (the AA and AV datasets for FTN, PAN, and CEO, due to redistribution issues);
(3) AI-generated data that we have generated (the TT datasets, which can be accessed after submitting the form https://forms.gle/WZt7KrxTcmfPXuho9 and accepting the terms
of good usage of the datasets).
## Description
HANSEN comprises 17 human "spoken-text" datasets, along with spoken texts generated by three LLMs: ChatGPT, PaLM2, and Vicuna13B.
Spoken text is the text/transcript version of what people say, such as speeches, conversations, and interviews.
HANSEN can be used for different authorship analysis tasks.
Currently, three tasks are defined.
1. AA (Author Attribution): a multi-class classification problem. Given a spoken text T, identify the speaker from a list of candidate speakers.
2. AV (Author Verification): a binary classification problem. Given a pair of spoken texts (T1, T2), detect whether they were produced by the same speaker or by different speakers.
3. TT (Turing Test / human vs. AI text detection): a binary classification problem. Given a spoken text T, identify whether the speaker is a human or an LLM.
## AA Task
Currently there are 17 human datasets. Each dataset has two versions: small (number of speakers N=10) and large (N=100 in most cases; N=30 for USP and SEC, N=50 for TED, N=56 for PAN).
So, AA_TED_small loads the dataframes for the 10-class classification problem on the TED dataset.
The dataframes have two columns: author_id (0 to N-1) and text. The list of datasets is as follows.
Dataset | Description
------------- | -------------
TED | TED talks
Spotify | Spotify podcasts
BASE | British Academic Spoken English (BASE) corpus (Nesi and Thompson, 2003)
BNC | British National Corpus
BNC14 | Contemporary version of BNC
MSU | MSU Switchboard Dialogue Act (Telephone conversation)
PAN | Spoken portion of PAN'23 AV datasets
Tennis | Post-match Interview of Tennis players
CEO | CEO and other financial interviews
Voxceleb | Interview of YouTube celebrities
BP | British Parliament Question and Answers
Voxpopuli | European Parliament Events recording
FTN | Face the Nation tv program transcripts
USP | US Life Podcast radio program transcripts
SEC | Security Exchange Commission speeches
Debate | Debates held as part of Intelligence Squared Debates
Court | U.S. Supreme Court oral arguments transcripts
For the CEO and FTN datasets, the original text is not included due to redistribution issues. We have added a URL and line number (in the text) for each sample in these datasets.
(A script to download the original text will be provided soon.)
## AV Task
The dataframes have three columns: label (0 if different speakers, 1 if same speaker), text1, and text2. Dataset descriptions are the same as for the AA task.
## TT Task
Currently HANSEN has three LLMs in five categories (from human dataset settings: TED, Spotify, SEC, CEO, Tennis) spoken texts.
LLM | Description
------------- | -------------
ChatGPT | gpt-3.5-turbo
PALM | PaLM2 (chat-bison@001)
Vicuna13B | Vicuna 13B version finetuned on Llama 13B
So, TT_ChatGPT_TED loads the dataframes for the human (0) vs. ChatGPT (1) dataset in the TED category.
The dataframes have two columns: label (0 for human, 1 for AI) and text.
To access the HANSEN-TT dataset, please fill out the form and agree to the terms & conditions:
https://forms.gle/WZt7KrxTcmfPXuho9
If you use the HANSEN dataset in your research, please use the following citation:
@article{tripto2023hansen,
title={HANSEN: Human and AI Spoken Text Benchmark for Authorship Analysis},
author={Tripto, Nafis Irtiza and Uchendu, Adaku and Le, Thai and Setzu, Mattia and Giannotti, Fosca and Lee, Dongwon},
journal={arXiv preprint arXiv:2310.16746},
year={2023}
} | [
-0.24877561628818512,
-0.6922613978385925,
0.3713853657245636,
0.24956156313419342,
0.014667591080069542,
-0.006609851960092783,
-0.34954777359962463,
-0.4031910002231598,
0.1438039094209671,
0.6974457502365112,
-0.3108082115650177,
-0.6776968836784363,
-0.5229764580726624,
0.6041164994239... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
distil-whisper/common_voice_13_0-timestamped | distil-whisper | 2023-09-25T10:30:12Z | 122 | 0 | null | [
"task_categories:automatic-speech-recognition",
"language:en",
"license:cc0-1.0",
"region:us"
] | 2023-09-25T10:30:12Z | 2023-09-22T09:05:04.000Z | 2023-09-22T09:05:04 | ---
license: cc0-1.0
task_categories:
- automatic-speech-recognition
language:
- en
pretty_name: Common Voice 13
---
# Distil Whisper: Common Voice 13 With Timestamps
This is a variant of the [Common Voice 13](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0) dataset, augmented to return the pseudo-labelled Whisper
transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/common_voice_13_0", "en")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/common_voice_13_0", "en", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
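Because a streamed split is an iterable rather than an indexable dataset, peeking at a few examples is easiest with `itertools.islice`. A small generic helper (an illustration, not part of this dataset's API):

```python
from itertools import islice

def take(stream, k):
    """Collect the first k samples from a streamed split without
    iterating over (or downloading) the rest."""
    return list(islice(stream, k))
```

With the streaming snippet above, `take(dataset["validation"], 4)` returns the first four samples as dictionaries.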
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under cc0-1.0.
| [
-0.2475033402442932,
-0.5975220799446106,
0.13424460589885712,
0.6078150868415833,
-0.22836792469024658,
0.08634790778160095,
-0.1445561945438385,
-0.3180568814277649,
0.4255504012107849,
0.33154335618019104,
-1.0000277757644653,
-0.4013771414756775,
-0.5739644765853882,
0.1433746218681335... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hasangoni/Electron_microscopy_dataset | hasangoni | 2023-09-25T07:57:56Z | 122 | 0 | null | [
"task_categories:image-segmentation",
"size_categories:10K<n<100K",
"language:en",
"microscopy",
"EPFL",
"image segmentation",
"region:us"
] | 2023-09-25T07:57:56Z | 2023-09-22T16:54:18.000Z | 2023-09-22T16:54:18 | ---
task_categories:
- image-segmentation
language:
- en
tags:
- microscopy
- EPFL
- image segmentation
pretty_name: electron microscopy patch image
size_categories:
- 10K<n<100K
---
The dataset:
- Consists of patches extracted from the existing dataset available at https://www.epfl.ch/labs/cvlab/data/data-em/.
- Contains patches of size (256, 256).
- Excludes any patches with empty masks to ensure quality.
- Carries the same license as the original dataset.
- Please refer to the license for information on allowed usage.
- If you have any questions or concerns about the dataset, please do not hesitate to contact me. | [
-0.7270281314849854,
-0.6396762728691101,
0.31109070777893066,
0.3772042393684387,
-0.2763480544090271,
-0.015179355628788471,
0.025585118681192398,
-0.3662191331386566,
0.4275086522102356,
1.4814729690551758,
-0.6817494034767151,
-0.48768165707588196,
-0.16639621555805206,
0.1304929107427... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tmabraham/pubmed-enrico-tokenized | tmabraham | 2023-10-27T11:11:24Z | 122 | 0 | null | [
"region:us"
] | 2023-10-27T11:11:24Z | 2023-10-27T09:48:34.000Z | 2023-10-27T09:48:34 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
patent/AIPD_nlp_sbert_dataset | patent | 2023-11-03T23:57:35Z | 122 | 0 | null | [
"region:us"
] | 2023-11-03T23:57:35Z | 2023-11-03T09:25:48.000Z | 2023-11-03T09:25:48 | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: float64
splits:
- name: train
num_bytes: 1130851138.7014475
num_examples: 453043
- name: test
num_bytes: 62827420.71087167
num_examples: 25170
- name: valid
num_bytes: 62824924.58768093
num_examples: 25169
download_size: 476485691
dataset_size: 1256503484.0
---
# Dataset Card for "AIPD_nlp_sbert_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4411761462688446,
-0.10303622484207153,
-0.09436585009098053,
0.23718777298927307,
-0.22660663723945618,
-0.053274448961019516,
0.07908528298139572,
-0.18238864839076996,
0.7060163021087646,
0.5902727246284485,
-0.6613161563873291,
-0.5941135287284851,
-0.7288644909858704,
0.01256908103... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autumnjohnson/ceti_audio | autumnjohnson | 2023-11-14T21:46:21Z | 122 | 0 | null | [
"size_categories:1K<n<10K",
"region:us"
] | 2023-11-14T21:46:21Z | 2023-11-06T10:15:17.000Z | 2023-11-06T10:15:17 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: coda_type
dtype: string
- name: path
dtype: string
- name: sampling_rate
dtype: int64
splits:
- name: train
num_bytes: 329883509.875
num_examples: 3529
download_size: 162683744
dataset_size: 329883509.875
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
pretty_name: Project CETI (Cetacean Translation Initiative) audio
size_categories:
- 1K<n<10K
---
# Dataset Card for "ceti_audio"
## Table of Contents
- [Dataset Card for "ceti\_audio"](#dataset-card-for-ceti_audio)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@autumnjohnson](https://github.com/<github-username>) for adding this dataset. | [
-0.5987246632575989,
-0.5699872374534607,
0.23284536600112915,
0.30699849128723145,
-0.1421949863433838,
0.13139213621616364,
-0.593692421913147,
-0.45797112584114075,
0.642154335975647,
0.5962461233139038,
-0.9011470079421997,
-1.2156331539154053,
-0.6960424780845642,
0.12562131881713867,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
marksverdhei/clickbait_title_classification | marksverdhei | 2022-03-29T21:25:01Z | 121 | 3 | null | [
"license:mit",
"arxiv:1610.09786",
"region:us"
] | 2022-03-29T21:25:01Z | 2022-03-29T21:02:09.000Z | 2022-03-29T21:02:09 | ---
license: mit
---
Dataset introduced in [Stop Clickbait: Detecting and Preventing Clickbaits in Online News Media](https://arxiv.org/abs/1610.09786)
by Abhijnan Chakraborty, Bhargavi Paranjape, Sourya Kakarla, Niloy Ganguly
Abhijnan Chakraborty, Bhargavi Paranjape, Sourya Kakarla, and Niloy Ganguly. "Stop Clickbait: Detecting and Preventing Clickbaits in Online News Media”. In Proceedings of the 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), San Francisco, US, August 2016.
Cite:
```
@inproceedings{chakraborty2016stop,
title={Stop Clickbait: Detecting and preventing clickbaits in online news media},
author={Chakraborty, Abhijnan and Paranjape, Bhargavi and Kakarla, Sourya and Ganguly, Niloy},
booktitle={Advances in Social Networks Analysis and Mining (ASONAM), 2016 IEEE/ACM International Conference on},
pages={9--16},
year={2016},
organization={IEEE}
}
```
| [
-0.17979444563388824,
-0.740436315536499,
-0.032039038836956024,
0.2788282334804535,
-0.3397541344165802,
0.008224602788686752,
-0.20414209365844727,
-0.28169575333595276,
0.3729360103607178,
0.5233309268951416,
-0.3407629430294037,
-0.6115060448646545,
-0.6354684829711914,
0.2799032628536... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
strombergnlp/nordic_langid | strombergnlp | 2022-10-25T21:42:02Z | 121 | 3 | nordic-langid | [
"task_categories:text-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:da",
"language:nn",
"language:nb",
"language:fo",
"language:is",
"language:sv",
"license:cc-by-sa... | 2022-10-25T21:42:02Z | 2022-05-10T17:27:03.000Z | 2022-05-10T17:27:03 | ---
annotations_creators:
- found
language_creators:
- found
language:
- da
- nn
- nb
- fo
- is
- sv
license:
- cc-by-sa-3.0
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
paperswithcode_id: nordic-langid
pretty_name: Nordic Language ID for Distinguishing between Similar Languages
tags:
- language-identification
---
# Dataset Card for nordic_langid
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://github.com/StrombergNLP/NordicDSL](https://github.com/StrombergNLP/NordicDSL)
- **Repository:** [https://github.com/StrombergNLP/NordicDSL](https://github.com/StrombergNLP/NordicDSL)
- **Paper:** [https://aclanthology.org/2021.vardial-1.8/](https://aclanthology.org/2021.vardial-1.8/)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [René Haas](mailto:renha@itu.dk)
### Dataset Summary
Automatic language identification is a challenging problem. Discriminating
between closely related languages is especially difficult. This paper presents
a machine learning approach for automatic language identification for the
Nordic languages, which often suffer miscategorisation by existing
state-of-the-art tools. Concretely, we focus on discriminating between six
Nordic languages: Danish, Swedish, Norwegian (Nynorsk), Norwegian (Bokmål),
Faroese and Icelandic.
This is the data for the tasks. Two variants are provided: 10K and 50K,
holding 10,000 and 50,000 examples per language respectively.
For more info, see the paper: [Discriminating Between Similar Nordic Languages](https://aclanthology.org/2021.vardial-1.8/).
### Supported Tasks and Leaderboards
- `language-identification`: classifying a sentence as one of the six target languages.
### Languages
This dataset is in six similar Nordic languages:
- Danish, `da`
- Faroese, `fo`
- Icelandic, `is`
- Norwegian Bokmål, `nb`
- Norwegian Nynorsk, `nn`
- Swedish, `sv`
## Dataset Structure
The dataset has two parts, one with 10K samples per language and another with 50K per language.
The original splits and data allocation used in the paper are presented here.
### Data Instances
[Needs More Information]
### Data Fields
- `id`: the sentence's unique identifier, a `string`
- `sentence`: the text to be classified, a `string`
- `language`: the class label, one of `da`, `fo`, `is`, `nb`, `nn`, `sv`.
### Data Splits
Train and Test splits are provided, divided using the code provided with the paper.
## Dataset Creation
### Curation Rationale
Data is taken from Wikipedia and Tatoeba from each of these six languages.
### Source Data
#### Initial Data Collection and Normalization
**Data collection** Data was scraped from Wikipedia. We downloaded summaries for randomly chosen Wikipedia
articles in each of the languages, saved as raw text
to six .txt files of about 10MB each.
The 50K section is extended with Tatoeba data, which provides a different register to Wikipedia text, and then topped up with more Wikipedia data.
**Extracting Sentences** The first pass in sentence
tokenisation is splitting by line breaks. We then extract shorter sentences with the sentence tokenizer
(sent_tokenize) function from NLTK (Loper
and Bird, 2002). This does a better job than simply
splitting on '.', because abbreviations, which can
appear within a legitimate sentence, typically
include a period symbol.
**Cleaning characters** The initial data set has
many characters that do not belong to the alphabets of the languages we work with. Often the
Wikipedia pages for people or places contain names
in foreign languages. For example a summary
might contain Chinese or Russian characters which
are not strong signals for the purpose of discriminating between the target languages.
Further, some characters in the target languages may
be mis-encoded. These mis-encodings are also unlikely
to be intrinsically strong or stable signals.
To simplify feature extraction, and to reduce the
size of the vocabulary, the raw data is converted
to lowercase and stripped of all characters which
are not part of the standard alphabet of the six
languages using a character whitelist.
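The lowercasing and whitelist filtering described above can be sketched as follows. The exact whitelist used in the paper is not reproduced here, so the character set below is an illustrative assumption covering letters from the six alphabets.

```python
# Illustrative whitelist: ASCII letters, a space, and letters used in the
# six Nordic alphabets (the paper's exact whitelist may differ).
WHITELIST = "abcdefghijklmnopqrstuvwxyz åäáæéíðóöøúýþ"

def clean(text: str) -> str:
    """Lowercase the text and keep only whitelisted characters."""
    text = text.lower()
    return "".join(ch for ch in text if ch in WHITELIST)

print(clean("København er Danmarks hovedstad!"))
# -> "københavn er danmarks hovedstad"
```

Digits, punctuation, and foreign scripts (e.g. Chinese or Cyrillic characters in names) are all dropped by the same filter, which also shrinks the character vocabulary for feature extraction.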
#### Who are the source language producers?
The source language is from Wikipedia contributors and Tatoeba contributors.
### Annotations
#### Annotation process
The labels are found data rather than the product of a manual annotation process.
#### Who are the annotators?
The annotations were found. They are determined by which language section a contributor posts their content to.
### Personal and Sensitive Information
The data hasn't been checked for PII, and is already all public. Tatoeba is based on translations of synthetic conversational turns and is unlikely to bear personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is intended to help correctly identify content in the languages of six minority languages. Existing systems often confuse these, especially Bokmål and Danish or Icelandic and Faroese. However, some dialects are missed (for example Bornholmsk) and the closed nature of the classification task thus excludes speakers of these languages without recognising their existence.
### Discussion of Biases
The text comes from only two genres, so might not transfer well to other domains.
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
The data here is licensed CC-BY-SA 3.0. If you use this data, you MUST state its origin.
### Citation Information
```
@inproceedings{haas-derczynski-2021-discriminating,
title = "Discriminating Between Similar Nordic Languages",
author = "Haas, Ren{\'e} and
Derczynski, Leon",
booktitle = "Proceedings of the Eighth Workshop on NLP for Similar Languages, Varieties and Dialects",
month = apr,
year = "2021",
address = "Kiyv, Ukraine",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.vardial-1.8",
pages = "67--75",
}
```
| [
-0.5665562152862549,
-0.5931638479232788,
-0.0005648594815284014,
0.18292900919914246,
-0.49973610043525696,
0.12040815502405167,
-0.48072969913482666,
-0.590411901473999,
0.43102553486824036,
0.40292587876319885,
-0.41393959522247314,
-0.8344307541847229,
-0.5116501450538635,
0.5547748208... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ekinakyurek/ftrace | ekinakyurek | 2022-10-23T05:56:05Z | 121 | 3 | null | [
"task_ids:masked-language-modeling",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:TRex",
"source_datasets:Lama",
"language:en",
"license:cc-by-sa-4.0",
"license:cc-by-nc-4.0",
"arxiv:2205.11482",
"region:us"
] | 2022-10-23T05:56:05Z | 2022-05-23T04:33:24.000Z | 2022-05-23T04:33:24 | ---
language:
- en
license:
- cc-by-sa-4.0
- cc-by-nc-4.0
multilinguality:
- monolingual
pretty_name: FTRACE
size_categories:
- 1M<n<10M
source_datasets:
- TRex
- Lama
task_categories:
- influence-attribution
- information-retrieval
- question-answering-retrieval
task_ids:
- influence-attribution
- masked-language-modeling
---
# Dataset Card for "FTRACE"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/ekinakyurek/ftrace
- **Repository:** https://github.com/ekinakyurek/influence
- **Paper:** https://arxiv.org/pdf/2205.11482.pdf
- **Point of Contact:** [Ekin Akyürek](mailto:akyurek@mit.edu)
- **Size of downloaded dataset files:** 113.7 MB
- **Size of the generated dataset:** 1006.6 MB
- **Total amount of disk used:** 1120.3 MB
### Dataset Summary
FTRACE is a zero-shot information retrieval benchmark devised for tracing a language model’s predictions back to training examples. In the accompanying paper, we evaluate commonly studied influence methods, including gradient-based (TracIn) and embedding-based approaches. The dataset contains two parts. First, the factual queries whose knowledge we trace are extracted from existing LAMA queries (Petroni et al., 2019). Second, Wikidata sentences are extracted from the TREx corpus (Elsahar et al., 2018). We annotate the extracted sentences with their stated facts, and these facts can be matched with the facts in the query set. In both parts, we provide (input, target) pairs as a masked language modeling task -- see the examples below. The same data can also be used in other formats, for example auto-regressive completion, by processing the `inputs_pretokenized` and `targets_pretokenized` fields.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### Abstracts
- **Size of downloaded dataset files:** 112 MB
- **Size of the generated dataset:** 884 MB
- **Total amount of disk used:** 996 MB
An example of 'abstract' looks as follows.
```
{"inputs_pretokenized": "The name Austroasiatic comes from the Latin words for \"south\" and \"Asia\", hence \"<extra_id_0>\".",
"targets_pretokenized": "<extra_id_0> South Asia",
"page_uri": "Q33199",
"masked_uri": "Q771405",
"masked_type": "subject",
"example_uris": "Q33199-1-Q48-Q771405-1",
"facts": "P361,Q48,Q771405;P30,Q48,Q771405",
"id": 8}
```
#### Queries
- **Size of downloaded dataset files:** 1.7 MB
- **Size of the generated dataset:** 8.9 MB
- **Total amount of disk used:** 10.6 MB
An example of 'query' looks as follows.
```
{"inputs_pretokenized": "Paul Ehrlich used to work in <extra_id_0> .",
"targets_pretokenized": "<extra_id_0> Frankfurt",
"uuid": "5b063008-a8ba-4064-9f59-e70102bb8c50",
"obj_uri": "Q1794",
"sub_uri": "Q57089",
"predicate_id": "P937",
"obj_surface": "Frankfurt",
"sub_surface": "Paul Ehrlich"}
```
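A query can be traced to abstracts by comparing its `(predicate_id, sub_uri, obj_uri)` triple against each abstract's `facts` annotation. The sketch below assumes `facts` is a `;`-separated list of comma-separated triples, as in the abstract example above; the ordering of the two entity URIs inside each triple is an assumption, so both orderings are checked.

```python
def parse_facts(facts: str) -> set:
    """Split an annotation like 'P361,Q48,Q771405;P30,Q48,Q771405'
    into a set of (predicate, uri, uri) triples."""
    return {tuple(f.split(",")) for f in facts.split(";") if f}

def matches(query: dict, abstract_facts: str) -> bool:
    """True if the query's fact appears among the abstract's annotated facts.
    Both URI orderings are tried since the field layout is an assumption."""
    triples = parse_facts(abstract_facts)
    p, o, s = query["predicate_id"], query["obj_uri"], query["sub_uri"]
    return (p, s, o) in triples or (p, o, s) in triples

# Sanity check with the abstract annotation shown above.
query = {"predicate_id": "P361", "obj_uri": "Q771405", "sub_uri": "Q48"}
assert matches(query, "P361,Q48,Q771405;P30,Q48,Q771405")
```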
### Data Fields
The data fields are the same among all splits.
#### Abstracts
- `inputs_pretokenized`: a `string` feature.
- `targets_pretokenized`: a `string` feature.
- `masked_uri`: a `string` feature.
- `masked_type`: a `string` feature.
- `facts`: a `string` feature.
- `id`: a `string` feature.
- `example_uris`: a `string` feature.
- `page_uri`: a `string` feature.
#### Queries
- `inputs_pretokenized`: a `string` feature.
- `targets_pretokenized`: a `string` feature.
- `obj_surface`: a `string` feature.
- `sub_surface`: a `string` feature.
- `obj_uri`: a `string` feature.
- `sub_uri`: a `string` feature.
- `predicate_id`: a `string` feature.
- `uuid`: a `string` feature.
### Data Splits
| name | train |
|-----------|------:|
|Abstracts |1560453|
|Queries |31479 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
LAMA: https://github.com/facebookresearch/LAMA
TRex: https://hadyelsahar.github.io/t-rex/
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
Parts of this dataset are available under the [Creative Commons Attribution-ShareAlike 4.0 License (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/) and the [Creative Commons Attribution-NonCommercial 4.0 International License](https://github.com/facebookresearch/LAMA/blob/master/LICENSE).
### Citation Information
The main paper should be cited as follows:
```
@misc{https://doi.org/10.48550/arxiv.2205.11482,
doi = {10.48550/ARXIV.2205.11482},
url = {https://arxiv.org/abs/2205.11482},
author = {Akyürek, Ekin and Bolukbasi, Tolga and Liu, Frederick and Xiong, Binbin and Tenney, Ian and Andreas, Jacob and Guu, Kelvin},
keywords = {Computation and Language (cs.CL), Information Retrieval (cs.IR), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Tracing Knowledge in Language Models Back to the Training Data},
publisher = {arXiv},
year = {2022},
}
```
Please also cite Petroni et al., 2019 for the query set, and Elsahar et al., 2018 for the abstract set.
```
@inproceedings{petroni2019language,
title={Language Models as Knowledge Bases?},
author={F. Petroni, T. Rockt{\"{a}}schel, A. H. Miller, P. Lewis, A. Bakhtin, Y. Wu and S. Riedel},
booktitle={In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019},
year={2019}
}
```
```
@inproceedings{elsahar2018t,
title={T-rex: A large scale alignment of natural language with knowledge base triples},
author={Elsahar, Hady and Vougiouklis, Pavlos and Remaci, Arslen and Gravier, Christophe and Hare, Jonathon and Laforest, Frederique and Simperl, Elena},
booktitle={Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)},
year={2018}
}
```
### Contributions | [
-0.5678192377090454,
-0.677052915096283,
0.22263988852500916,
0.1880352795124054,
-0.22609032690525055,
0.018141940236091614,
-0.41577085852622986,
-0.4617190361022949,
0.5918943881988525,
0.42259466648101807,
-0.7470227479934692,
-0.9405543804168701,
-0.5197178721427917,
0.038992855697870... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ajders/machine_translated_cnn_dailymail_da_small | ajders | 2022-08-26T13:01:36Z | 121 | 0 | null | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:translation",
"size_categories:1K<n<10K",
"language:da",
"license:apache-2.0",
"region:us"
] | 2022-08-26T13:01:36Z | 2022-05-24T11:51:34.000Z | 2022-05-24T11:51:34 | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- da
license:
- apache-2.0
multilinguality:
- translation
pretty_name: machine_translated_cnn_dailymail_da_small
size_categories:
- 1K<n<10K
source_datasets: []
task_categories:
- summarization
task_ids:
- news-articles-summarization
---
# Dataset Card for machine_translated_cnn_dailymail_da_small
### Dataset Summary
This dataset is a machine-translated subset of the [CNN Dailymail dataset](https://huggingface.co/datasets/ccdv/cnn_dailymail) in Danish, produced with the [Helsinki-NLP/opus-mt-en-da](https://huggingface.co/Helsinki-NLP/opus-mt-en-da) model. It consists of 2,872 articles with summaries, intended for Danish text summarisation.
## Dataset Structure
Machine translated articles (`article`) with corresponding summaries (`highlights`).
```
{
'article': Value(dtype='string', id=None),
'highlights': Value(dtype='string', id=None),
'id': Value(dtype='string', id=None)
}
```
### Licensing Information
The dataset is released under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0). | [
-0.45693132281303406,
-0.5592278838157654,
0.0819663256406784,
0.3266119360923767,
-0.9339714050292969,
-0.21880722045898438,
-0.42319488525390625,
-0.32389044761657715,
0.21180514991283417,
0.6986241936683655,
-0.5679524540901184,
-1.0481724739074707,
-0.7665287256240845,
0.57749301195144... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/detoxify-pile-chunk3-500000-550000 | tomekkorbak | 2022-10-04T17:42:07Z | 121 | 0 | null | [
"region:us"
] | 2022-10-04T17:42:07Z | 2022-10-04T17:42:00.000Z | 2022-10-04T17:42:00 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/detoxify-pile-chunk3-550000-600000 | tomekkorbak | 2022-10-04T17:46:16Z | 121 | 0 | null | [
"region:us"
] | 2022-10-04T17:46:16Z | 2022-10-04T17:46:07.000Z | 2022-10-04T17:46:07 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/detoxify-pile-chunk3-700000-750000 | tomekkorbak | 2022-10-04T17:50:07Z | 121 | 0 | null | [
"region:us"
] | 2022-10-04T17:50:07Z | 2022-10-04T17:49:59.000Z | 2022-10-04T17:49:59 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomekkorbak/detoxify-pile-chunk3-600000-650000 | tomekkorbak | 2022-10-04T17:51:35Z | 121 | 0 | null | [
"region:us"
] | 2022-10-04T17:51:35Z | 2022-10-04T17:51:26.000Z | 2022-10-04T17:51:26 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
teven/enwiki_100k | teven | 2023-04-03T17:16:55Z | 121 | 1 | null | [
"region:us"
] | 2023-04-03T17:16:55Z | 2023-04-03T17:13:51.000Z | 2023-04-03T17:13:51 | ---
dataset_info:
features:
- name: metadata
dtype: string
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 2570893740
num_examples: 1000000
download_size: 1550572660
dataset_size: 2570893740
---
# Dataset Card for "enwiki_100k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7833015322685242,
-0.2942749559879303,
-0.034709565341472626,
0.32606595754623413,
-0.16465267539024353,
-0.23625116050243378,
0.017191773280501366,
-0.16414469480514526,
1.0407569408416748,
0.5877873301506042,
-0.9041978120803833,
-0.6037531495094299,
-0.527233898639679,
0.145719230175... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
FreedomIntelligence/huatuo_encyclopedia_qa | FreedomIntelligence | 2023-05-17T03:20:55Z | 121 | 25 | null | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:zh",
"license:apache-2.0",
"medical",
"arxiv:2305.01526",
"region:us"
] | 2023-05-17T03:20:55Z | 2023-05-10T08:30:14.000Z | 2023-05-10T08:30:14 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- zh
tags:
- medical
size_categories:
- 100K<n<1M
---
# Dataset Card for Huatuo_encyclopedia_qa
## Dataset Description
- **Homepage: https://www.huatuogpt.cn/**
- **Repository: https://github.com/FreedomIntelligence/HuatuoGPT**
- **Paper: https://arxiv.org/abs/2305.01526**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains a total of 364,420 medical QA pairs; some entries include the same question phrased in multiple ways. We extracted medical QA pairs from plain texts (e.g., medical encyclopedias and medical articles). We collected 8,699 encyclopedia entries for diseases and 2,736 encyclopedia entries for medicines from Chinese Wikipedia, and additionally crawled 226,432 high-quality medical articles from the Qianwen Health website.
## Dataset Creation
### Source Data
https://zh.wikipedia.org/wiki/
https://51zyzy.com/
## Citation
```
@misc{li2023huatuo26m,
title={Huatuo-26M, a Large-scale Chinese Medical QA Dataset},
author={Jianquan Li and Xidong Wang and Xiangbo Wu and Zhiyi Zhang and Xiaolong Xu and Jie Fu and Prayag Tiwari and Xiang Wan and Benyou Wang},
year={2023},
eprint={2305.01526},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| [
-0.2821882367134094,
-0.5633546113967896,
0.3657184839248657,
-0.007586943916976452,
-0.5163557529449463,
-0.3615487515926361,
0.020574310794472694,
-0.3619908392429352,
0.40847456455230713,
0.4095506966114044,
-0.2871935963630676,
-0.7870118618011475,
-0.19166824221611023,
0.3444958031177... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HausaNLP/AfriSenti-Twitter | HausaNLP | 2023-09-03T10:39:19Z | 121 | 1 | null | [
"task_categories:text-classification",
"task_ids:sentiment-analysis",
"task_ids:sentiment-classification",
"task_ids:sentiment-scoring",
"task_ids:semantic-similarity-classification",
"task_ids:semantic-similarity-scoring",
"multilinguality:monolingual",
"multilinguality:multilingual",
"size_categor... | 2023-09-03T10:39:19Z | 2023-06-16T08:49:02.000Z | 2023-06-16T08:49:02 | ---
license: cc-by-nc-sa-4.0
task_categories:
- text-classification
task_ids:
- sentiment-analysis
- sentiment-classification
- sentiment-scoring
- semantic-similarity-classification
- semantic-similarity-scoring
tags:
- sentiment-analysis
- twitter
- tweets
- sentiment
multilinguality:
- monolingual
- multilingual
size_categories:
- 100K<n<1M
language:
- amh
- ary
- arq
- hau
- ibo
- kin
- por
- pcm
- oro
- swa
- tir
- twi
- tso
- yor
pretty_name: AfriSenti
---
<p align="center">
<img src="https://raw.githubusercontent.com/afrisenti-semeval/afrisent-semeval-2023/main/images/afrisenti-twitter.png" width="700" height="500">
</p>

--------------------------------------------------------------------------------
## Dataset Description
- **Homepage:** https://github.com/afrisenti-semeval/afrisent-semeval-2023
- **Repository:** [GitHub](https://github.com/afrisenti-semeval/afrisent-semeval-2023)
- **Paper:** [AfriSenti: A Twitter Sentiment Analysis Benchmark for African Languages](https://arxiv.org/pdf/2302.08956.pdf)
- **Paper:** [NaijaSenti: A Nigerian Twitter Sentiment Corpus for Multilingual Sentiment Analysis](https://arxiv.org/pdf/2201.08277.pdf)
- **Leaderboard:** N/A
- **Point of Contact:** [Shamsuddeen Muhammad](shamsuddeen2004@gmail.com)
### Dataset Summary
AfriSenti is the largest sentiment analysis dataset for under-represented African languages, covering 110,000+ annotated tweets in 14 African languages (Amharic, Algerian Arabic, Hausa, Igbo, Kinyarwanda, Moroccan Arabic, Mozambican Portuguese, Nigerian Pidgin, Oromo, Swahili, Tigrinya, Twi, Xitsonga, and Yoruba).
The datasets are used in the first Afrocentric SemEval shared task, SemEval 2023 Task 12: Sentiment analysis for African languages (AfriSenti-SemEval). AfriSenti allows the research community to build sentiment analysis systems for various African languages and enables the study of sentiment and contemporary language use in African languages.
### Supported Tasks and Leaderboards
The AfriSenti can be used for a wide range of sentiment analysis tasks in African languages, such as sentiment classification, sentiment intensity analysis, and emotion detection. This dataset is suitable for training and evaluating machine learning models for various NLP tasks related to sentiment analysis in African languages.
[SemEval 2023 Task 12 : Sentiment Analysis for African Languages](https://codalab.lisn.upsaclay.fr/competitions/7320)
### Languages
14 African languages (Amharic (amh), Algerian Arabic (ary), Hausa(hau), Igbo(ibo), Kinyarwanda(kin), Moroccan Arabic/Darija(arq), Mozambican Portuguese(por), Nigerian Pidgin (pcm), Oromo (oro), Swahili(swa), Tigrinya(tir), Twi(twi), Xitsonga(tso), and Yoruba(yor)).
## Dataset Structure
### Data Instances
For each instance, there is a string for the tweet and a string for the label. See the AfriSenti [dataset viewer](https://huggingface.co/datasets/HausaNLP/AfriSenti-Twitter/viewer/amh/train) to explore more examples.
```
{
"tweet": "string",
"label": "string"
}
```
### Data Fields
The data fields are:
```
tweet: a string feature.
label: a classification label, with possible values including positive, negative and neutral.
```
### Data Splits
The AfriSenti dataset has 3 splits: train, validation, and test. Below are the statistics for Version 1.0.0 of the dataset.
| | ama | arq | hau | ibo | ary | orm | pcm | pt-MZ | kin | swa | tir | tso | twi | yo |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| train | 5,982 | 1,652 | 14,173 | 10,193 | 5,584| - | 5,122 | 3,064 | 3,303 | 1,811 | - | 805 | 3,482| 8,523 |
| dev | 1,498 | 415 | 2,678 | 1,842 | 1,216 | 397 | 1,282 | 768 | 828 | 454 | 399 | 204 | 389 | 2,091 |
| test | 2,000 | 959 | 5,304 | 3,683 | 2,962 | 2,097 | 4,155 | 3,663 | 1,027 | 749 | 2,001 | 255 | 950 | 4,516 |
| total | 9,483 | 3,062 | 22,155 | 15,718 | 9,762 | 2,494 | 10,559 | 7,495 | 5,158 | 3,014 | 2,400 | 1,264 | 4,821 | 15,130 |
### How to use it
```python
from datasets import load_dataset
# you can load specific languages (e.g., Amharic). This download train, validation and test sets.
ds = load_dataset("HausaNLP/AfriSenti-Twitter", "amh")
# train set only
ds = load_dataset("HausaNLP/AfriSenti-Twitter", "amh", split = "train")
# test set only
ds = load_dataset("HausaNLP/AfriSenti-Twitter", "amh", split = "test")
# validation set only
ds = load_dataset("HausaNLP/AfriSenti-Twitter", "amh", split = "validation")
```
## Dataset Creation
### Curation Rationale
AfriSenti Version 1.0.0 aimed to be used in the first Afrocentric SemEval shared task **[SemEval 2023 Task 12: Sentiment analysis for African languages (AfriSenti-SemEval)](https://afrisenti-semeval.github.io)**.
### Source Data
Twitter
### Personal and Sensitive Information
We anonymized the tweets by replacing all *@mentions* by *@user* and removed all URLs.
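The anonymisation step described above can be sketched with two regular expressions. This is an illustrative reimplementation, not the curators' exact script.

```python
import re

def anonymize(tweet: str) -> str:
    """Replace @mentions with @user and strip URLs."""
    tweet = re.sub(r"@\w+", "@user", tweet)     # mask mentions
    tweet = re.sub(r"https?://\S+", "", tweet)  # drop URLs
    return " ".join(tweet.split())              # tidy leftover whitespace

print(anonymize("Thanks @ibrahim, see https://example.com/post for details"))
# -> "Thanks @user, see for details"
```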
## Considerations for Using the Data
### Social Impact of Dataset
The Afrisenti dataset has the potential to improve sentiment analysis for African languages, which is essential for understanding and analyzing the diverse perspectives of people in the African continent. This dataset can enable researchers and developers to create sentiment analysis models that are specific to African languages, which can be used to gain insights into the social, cultural, and political views of people in African countries. Furthermore, this dataset can help address the issue of underrepresentation of African languages in natural language processing, paving the way for more equitable and inclusive AI technologies.
## Additional Information
### Dataset Curators
AfriSenti is an extension of NaijaSenti, a dataset consisting of four Nigerian languages: Hausa, Yoruba, Igbo, and Nigerian-Pidgin. This dataset has been expanded to include other 10 African languages, and was curated with the help of the following:
| Language | Dataset Curators |
|---|---|
| Algerian Arabic (arq) | Nedjma Ousidhoum, Meriem Beloucif |
| Amharic (ama) | Abinew Ali Ayele, Seid Muhie Yimam |
| Hausa (hau) | Shamsuddeen Hassan Muhammad, Idris Abdulmumin, Ibrahim Said, Bello Shehu Bello |
| Igbo (ibo) | Shamsuddeen Hassan Muhammad, Idris Abdulmumin, Ibrahim Said, Bello Shehu Bello |
| Kinyarwanda (kin)| Samuel Rutunda |
| Moroccan Arabic/Darija (ary) | Oumaima Hourrane |
| Mozambique Portuguese (pt-MZ) | Felermino Dário Mário António Ali |
| Nigerian Pidgin (pcm) | Shamsuddeen Hassan Muhammad, Idris Abdulmumin, Ibrahim Said, Bello Shehu Bello |
| Oromo (orm) | Abinew Ali Ayele, Seid Muhie Yimam, Hagos Tesfahun Gebremichael, Sisay Adugna Chala, Hailu Beshada Balcha, Wendimu Baye Messell, Tadesse Belay |
| Swahili (swa) | Davis Davis |
| Tigrinya (tir) | Abinew Ali Ayele, Seid Muhie Yimam, Hagos Tesfahun Gebremichael, Sisay Adugna Chala, Hailu Beshada Balcha, Wendimu Baye Messell, Tadesse Belay |
| Twi (twi) | Salomey Osei, Bernard Opoku, Steven Arthur |
| Xithonga (tso) | Felermino Dário Mário António Ali |
| Yoruba (yor) | Shamsuddeen Hassan Muhammad, Idris Abdulmumin, Ibrahim Said, Bello Shehu Bello |
### Licensing Information
The AfriSenti dataset is licensed under a Creative Commons Attribution 4.0 International License.
### Citation Information
```
@inproceedings{Muhammad2023AfriSentiAT,
title={AfriSenti: A Twitter Sentiment Analysis Benchmark for African Languages},
author={Shamsuddeen Hassan Muhammad and Idris Abdulmumin and Abinew Ali Ayele and Nedjma Ousidhoum and David Ifeoluwa Adelani and Seid Muhie Yimam and Ibrahim Sa'id Ahmad and Meriem Beloucif and Saif Mohammad and Sebastian Ruder and Oumaima Hourrane and Pavel Brazdil and Felermino D'ario M'ario Ant'onio Ali and Davis Davis and Salomey Osei and Bello Shehu Bello and Falalu Ibrahim and Tajuddeen Gwadabe and Samuel Rutunda and Tadesse Belay and Wendimu Baye Messelle and Hailu Beshada Balcha and Sisay Adugna Chala and Hagos Tesfahun Gebremichael and Bernard Opoku and Steven Arthur},
year={2023}
}
```
```
@article{muhammad2023semeval,
title={SemEval-2023 Task 12: Sentiment Analysis for African Languages (AfriSenti-SemEval)},
author={Muhammad, Shamsuddeen Hassan and Abdulmumin, Idris and Yimam, Seid Muhie and Adelani, David Ifeoluwa and Ahmad, Ibrahim Sa'id and Ousidhoum, Nedjma and Ayele, Abinew and Mohammad, Saif M and Beloucif, Meriem},
journal={arXiv preprint arXiv:2304.06845},
year={2023}
}
``` | [
-0.7451377511024475,
-0.4214719235897064,
-0.10431916266679764,
0.6208390593528748,
-0.2647916078567505,
-0.07293178141117096,
-0.3328765034675598,
-0.47611382603645325,
0.8069577813148499,
0.18844157457351685,
-0.5702060461044312,
-0.7414222359657288,
-0.7584500312805176,
0.27583417296409... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cryptom/ceval-exam | cryptom | 2023-06-24T00:40:14Z | 121 | 0 | null | [
"task_categories:text-classification",
"task_categories:multiple-choice",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:zh",
"license:cc-by-nc-sa-4.0",
"arxiv:2305.08322",
"region:us"
] | 2023-06-24T00:40:14Z | 2023-06-23T18:40:37.000Z | 2023-06-23T18:40:37 | ---
license: cc-by-nc-sa-4.0
task_categories:
- text-classification
- multiple-choice
- question-answering
language:
- zh
pretty_name: C-Eval
size_categories:
- 10K<n<100K
---
C-Eval is a comprehensive Chinese evaluation suite for foundation models. It consists of 13948 multi-choice questions spanning 52 diverse disciplines and four difficulty levels. Please visit our [website](https://cevalbenchmark.com/) and [GitHub](https://github.com/SJTU-LIT/ceval/tree/main) or check our [paper](https://arxiv.org/abs/2305.08322) for more details.
Each subject consists of three splits: dev, val, and test. The dev set for each subject consists of five exemplars with explanations, for few-shot evaluation. The val set is intended for hyperparameter tuning, and the test set is for model evaluation. Labels on the test split are not released; users are required to submit their results to obtain test accuracy automatically. [How to submit?](https://github.com/SJTU-LIT/ceval/tree/main#how-to-submit)
### Load the data
```python
from datasets import load_dataset
dataset = load_dataset("ceval/ceval-exam", name="computer_network")
print(dataset['val'][0])
# {'id': 0, 'question': '使用位填充方法,以01111110为位首flag,数据为011011111111111111110010,求问传送时要添加几个0____', 'A': '1', 'B': '2', 'C': '3', 'D': '4', 'answer': 'C', 'explanation': ''}
```
More details on loading and using the data are at our [github page](https://github.com/SJTU-LIT/ceval#data).
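The dev exemplars described above can be assembled into a few-shot prompt for evaluation. Below is a minimal sketch of one common formatting scheme; the field names (`question`, `A`–`D`, `answer`) follow the sample record shown earlier, and the exemplar rows here are placeholders standing in for real dev/val records:

```python
# Sketch: building a few-shot prompt from dev-split exemplars.
def format_example(row, include_answer=True):
    prompt = row["question"] + "\n"
    for choice in ("A", "B", "C", "D"):
        prompt += f"{choice}. {row[choice]}\n"
    prompt += "Answer:"
    if include_answer:
        prompt += f" {row['answer']}\n\n"
    return prompt

def build_few_shot_prompt(dev_rows, test_row):
    # Concatenate the answered dev exemplars, then append the
    # unanswered target question for the model to complete.
    prompt = "".join(format_example(r) for r in dev_rows)
    return prompt + format_example(test_row, include_answer=False)

# Placeholder rows for illustration only:
dev = [{"question": "1 + 1 = ?", "A": "1", "B": "2", "C": "3", "D": "4", "answer": "B"}]
test = {"question": "2 + 2 = ?", "A": "2", "B": "3", "C": "4", "D": "5", "answer": "C"}
print(build_few_shot_prompt(dev, test))
```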
Please cite our paper if you use our dataset.
```
@article{huang2023ceval,
title={C-Eval: A Multi-Level Multi-Discipline Chinese Evaluation Suite for Foundation Models},
author={Huang, Yuzhen and Bai, Yuzhuo and Zhu, Zhihao and Zhang, Junlei and Zhang, Jinghan and Su, Tangjun and Liu, Junteng and Lv, Chuancheng and Zhang, Yikai and Lei, Jiayi and Fu, Yao and Sun, Maosong and He, Junxian},
journal={arXiv preprint arXiv:2305.08322},
year={2023}
}
```
| [
-0.4442492127418518,
-1.2121330499649048,
0.3066120445728302,
0.2638392746448517,
0.1581091582775116,
0.1537633240222931,
-0.3672729730606079,
-0.3360186517238617,
-0.12425547093153,
0.39791929721832275,
-0.3320290446281433,
-0.4817109704017639,
-0.0914146825671196,
0.0624251663684845,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
FredZhang7/all-scam-spam | FredZhang7 | 2023-07-18T17:16:16Z | 121 | 4 | null | [
"task_categories:text-classification",
"task_categories:zero-shot-classification",
"size_categories:10K<n<100K",
"language:no",
"language:es",
"language:so",
"language:ca",
"language:af",
"language:it",
"language:nl",
"language:hi",
"language:cy",
"language:ar",
"language:sv",
"language:... | 2023-07-18T17:16:16Z | 2023-07-04T22:07:15.000Z | 2023-07-04T22:07:15 | ---
license: apache-2.0
language:
- no
- es
- so
- ca
- af
- it
- nl
- hi
- cy
- ar
- sv
- cs
- pl
- de
- lt
- sq
- uk
- tl
- sl
- hr
- en
- fi
- vi
- id
- da
- ko
- bg
- mr
- ja
- bn
- ro
- pt
- fr
- hu
- tr
- zh
- mk
- ur
- sk
- ne
- et
- sw
- ru
- multilingual
task_categories:
- text-classification
- zero-shot-classification
tags:
- nlp
- moderation
size_categories:
- 10K<n<100K
---
This is a large corpus of 42,619 preprocessed text messages and emails sent by humans in 43 languages. `is_spam=1` means spam and `is_spam=0` means ham.
1040 rows of balanced data, consisting of casual conversations and scam emails in ≈10 languages, were manually collected and annotated by me, with some help from ChatGPT.
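Since the label is a simple binary `is_spam` column, checking the class balance (or rebalancing when mixing in rows from other sources, as suggested below) reduces to a filter over the rows. A minimal sketch, using placeholder rows in place of the dataset's real `text`/`is_spam` columns:

```python
# Sketch: splitting rows by the is_spam label and checking class balance.
# The rows below are placeholders for illustration only.
rows = [
    {"text": "Hey, are we still on for lunch tomorrow?", "is_spam": 0},
    {"text": "URGENT: claim your prize now!!!", "is_spam": 1},
    {"text": "Meeting notes attached.", "is_spam": 0},
]

ham = [r for r in rows if r["is_spam"] == 0]
spam = [r for r in rows if r["is_spam"] == 1]
print(f"ham={len(ham)} spam={len(spam)}")
```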
<br>
### Some preprocessing algorithms
- [spam_assassin.js](./spam_assassin.js), followed by [spam_assassin.py](./spam_assassin.py)
- [enron_spam.py](./enron_spam.py)
<br>
### Data composition

<br>
### Description
To keep the text format consistent between SMS messages and emails, email subjects and content are separated by two newlines:
```python
text = email.subject + "\n\n" + email.content
```
<br>
### Suggestions
- If you plan to train a model based on this dataset alone, I recommend adding **some** rows with `is_toxic=0` from `FredZhang7/toxi-text-3M`. Make sure the rows aren't spam.
<br>
### Other Sources
- https://huggingface.co/datasets/sms_spam
- https://github.com/MWiechmann/enron_spam_data
- https://github.com/stdlib-js/datasets-spam-assassin
- https://repository.ortolang.fr/api/content/comere/v3.3/cmr-simuligne.html | [
-0.10558541119098663,
-0.9617943167686462,
0.12448136508464813,
0.5680691003799438,
-0.07293187081813812,
-0.19182783365249634,
-0.351489394903183,
-0.3394269049167633,
0.4324728548526764,
0.8029719591140747,
-0.42198804020881653,
-0.8536013960838318,
-0.6243391633033752,
0.343530297279357... | null | null | null | null | null | null | null | null | null | null | null | null | null |