Hugging Face dataset records. Each row carries the dataset id, author, timestamps (last modified, created), download and like counts, tags, and the dataset card.
**roszcz/masked-maestro-v3** (author: roszcz, last modified: 2023-10-02T15:21:06Z, created: 2023-10-02T12:02:32Z, downloads: 70, likes: 0, tags: region:us)

---
dataset_info:
features:
- name: pitch
sequence: int8
length: 90
- name: start
sequence: float64
length: 90
- name: dstart
sequence: float64
length: 90
- name: end
sequence: float64
length: 90
- name: duration
sequence: float64
length: 90
- name: velocity
sequence: int8
length: 90
- name: source
dtype: string
- name: masking_space
struct:
- name: <Random Mask>
sequence: bool
length: 90
- name: <LH Mask>
sequence: bool
length: 90
- name: <RH Mask>
sequence: bool
length: 90
- name: <Harmonic Root Mask>
sequence: bool
length: 90
- name: <Harmonic Outliers Mask>
sequence: bool
length: 90
splits:
- name: test
num_bytes: 472275625
num_examples: 136870
- name: validation
num_bytes: 407260307
num_examples: 118080
- name: train
num_bytes: 3605902471
num_examples: 1045755
download_size: 4317450762
dataset_size: 4485438403
---
# Dataset Card for "masked-maestro-v3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
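To illustrate how the fixed-length note sequences and the boolean masks in `masking_space` fit together, here is a minimal sketch on a synthetic record with the same schema. The field names come from the card above; the example values and the `masked_pitches` helper are illustrative, not part of the dataset.

```python
import numpy as np

# Synthetic record with the schema described above; real records hold
# 90-element sequences, shortened here to 6 notes for readability.
record = {
    "pitch": np.array([60, 62, 64, 65, 67, 69], dtype=np.int8),
    "velocity": np.array([80, 80, 90, 70, 85, 88], dtype=np.int8),
    "masking_space": {
        "<Random Mask>": np.array([True, False, True, False, False, True]),
    },
}

def masked_pitches(rec, mask_name):
    """Select the pitches flagged by the given boolean mask."""
    mask = rec["masking_space"][mask_name]
    return rec["pitch"][mask]

print(masked_pitches(record, "<Random Mask>").tolist())  # [60, 64, 69]
```

The same pattern applies to any of the five masks (`<LH Mask>`, `<RH Mask>`, etc.), since they are all boolean sequences aligned with the 90-note window.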
**fujiki/databricks-dolly-15k-ja-reformat-v1** (author: fujiki, last modified: 2023-10-06T13:37:15Z, created: 2023-10-06T13:31:30Z, downloads: 70, likes: 0, tags: license:cc-by-sa-3.0, region:us)

---
license: cc-by-sa-3.0
dataset_info:
features:
- name: index
dtype: string
- name: category
dtype: string
- name: instructions
sequence: string
- name: responses
sequence: string
splits:
- name: train
num_bytes: 15973503
num_examples: 15015
download_size: 9056298
dataset_size: 15973503
---
This is a reformatted version of [kunishou/databricks-dolly-15k-ja](https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja).
If you use this dataset, please cite the original dataset as well.
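Since `instructions` and `responses` are parallel sequences of equal length, a record can be unrolled into chat-style turns. A minimal sketch follows; the `to_turns` helper and the example values are illustrative, not taken from the dataset.

```python
# Example record mirroring the schema above (values are made up).
example = {
    "index": "0",
    "category": "open_qa",
    "instructions": ["日本の首都はどこですか？"],
    "responses": ["日本の首都は東京です。"],
}

def to_turns(ex):
    """Interleave parallel instruction/response lists into chat turns."""
    turns = []
    for instruction, response in zip(ex["instructions"], ex["responses"]):
        turns.append({"role": "user", "content": instruction})
        turns.append({"role": "assistant", "content": response})
    return turns

print(to_turns(example))
```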
**fujiki/oasst1-89k-ja-reformat-v1** (author: fujiki, last modified: 2023-10-18T08:59:55Z, created: 2023-10-07T16:36:06Z, downloads: 70, likes: 1, tags: license:apache-2.0, region:us)

---
license: apache-2.0
dataset_info:
features:
- name: dataset
dtype: string
- name: id
dtype: string
- name: instructions
sequence: string
- name: responses
sequence: string
splits:
- name: train
num_bytes: 58992730
num_examples: 33919
download_size: 21655251
dataset_size: 58992730
---
- This is a reformatted version of a Japanese translation of [OpenAssistant/oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1).
- The original English dataset can be found at [OpenAssistant/oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1).
- The dataset before reformatting can be found at [`kunishou/oasst1-89k-ja`](https://huggingface.co/datasets/kunishou/oasst1-89k-ja).
- When you use this dataset, please also refer to and cite these datasets.
**open-llm-leaderboard/details_lizpreciatior__lzlv_70b_fp16_hf** (author: open-llm-leaderboard, last modified: 2023-10-24T11:08:31Z, created: 2023-10-10T17:25:55Z, downloads: 70, likes: 0, tags: region:us)

---
pretty_name: Evaluation run of lizpreciatior/lzlv_70b_fp16_hf
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [lizpreciatior/lzlv_70b_fp16_hf](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_lizpreciatior__lzlv_70b_fp16_hf\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-24T11:08:18.401041](https://huggingface.co/datasets/open-llm-leaderboard/details_lizpreciatior__lzlv_70b_fp16_hf/blob/main/results_2023-10-24T11-08-18.401041.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.040058724832214766,\n\
\ \"em_stderr\": 0.002008216561907643,\n \"f1\": 0.10676174496644267,\n\
\ \"f1_stderr\": 0.002328625422990624,\n \"acc\": 0.5717896950225979,\n\
\ \"acc_stderr\": 0.011591305235224383\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.040058724832214766,\n \"em_stderr\": 0.002008216561907643,\n\
\ \"f1\": 0.10676174496644267,\n \"f1_stderr\": 0.002328625422990624\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.30932524639878695,\n \
\ \"acc_stderr\": 0.012731710925078124\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8342541436464088,\n \"acc_stderr\": 0.010450899545370642\n\
\ }\n}\n```"
repo_url: https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|arc:challenge|25_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_24T11_08_18.401041
path:
- '**/details_harness|drop|3_2023-10-24T11-08-18.401041.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-24T11-08-18.401041.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_24T11_08_18.401041
path:
- '**/details_harness|gsm8k|5_2023-10-24T11-08-18.401041.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-24T11-08-18.401041.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hellaswag|10_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-10T17-25-31.421123.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-10T17-25-31.421123.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-10T17-25-31.421123.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_24T11_08_18.401041
path:
- '**/details_harness|winogrande|5_2023-10-24T11-08-18.401041.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-24T11-08-18.401041.parquet'
- config_name: results
data_files:
- split: 2023_10_10T17_25_31.421123
path:
- results_2023-10-10T17-25-31.421123.parquet
- split: 2023_10_24T11_08_18.401041
path:
- results_2023-10-24T11-08-18.401041.parquet
- split: latest
path:
- results_2023-10-24T11-08-18.401041.parquet
---
# Dataset Card for Evaluation run of lizpreciatior/lzlv_70b_fp16_hf
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [lizpreciatior/lzlv_70b_fp16_hf](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset

data = load_dataset(
    "open-llm-leaderboard/details_lizpreciatior__lzlv_70b_fp16_hf",
    "harness_winogrande_5",
    split="latest",
)
```
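Split names encode the run timestamp with underscores in place of the usual separators. A small sketch of recovering a `datetime` from one (the parsing format is inferred from the split names in this card, not a documented API):

```python
from datetime import datetime

# Split names such as "2023_10_24T11_08_18.401041" encode the run timestamp,
# with "_" standing in for the usual "-" and ":" separators.
def split_to_datetime(split_name: str) -> datetime:
    date_part, time_part = split_name.split("T")
    return datetime.strptime(
        f"{date_part.replace('_', '-')}T{time_part.replace('_', ':')}",
        "%Y-%m-%dT%H:%M:%S.%f",
    )

print(split_to_datetime("2023_10_24T11_08_18.401041"))
```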
## Latest results
These are the [latest results from run 2023-10-24T11:08:18.401041](https://huggingface.co/datasets/open-llm-leaderboard/details_lizpreciatior__lzlv_70b_fp16_hf/blob/main/results_2023-10-24T11-08-18.401041.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.040058724832214766,
"em_stderr": 0.002008216561907643,
"f1": 0.10676174496644267,
"f1_stderr": 0.002328625422990624,
"acc": 0.5717896950225979,
"acc_stderr": 0.011591305235224383
},
"harness|drop|3": {
"em": 0.040058724832214766,
"em_stderr": 0.002008216561907643,
"f1": 0.10676174496644267,
"f1_stderr": 0.002328625422990624
},
"harness|gsm8k|5": {
"acc": 0.30932524639878695,
"acc_stderr": 0.012731710925078124
},
"harness|winogrande|5": {
"acc": 0.8342541436464088,
"acc_stderr": 0.010450899545370642
}
}
```
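The "all" block above is derivable from the per-task entries; for instance, its `acc` is the unweighted mean of the GSM8K and Winogrande accuracies. A sanity-check sketch (not part of the evaluation harness itself):

```python
# Per-task accuracies copied from the latest-results JSON above.
per_task_acc = {
    "harness|gsm8k|5": 0.30932524639878695,
    "harness|winogrande|5": 0.8342541436464088,
}

# The aggregated "all" accuracy is the plain mean over the accuracy tasks.
mean_acc = sum(per_task_acc.values()) / len(per_task_acc)
print(mean_acc)  # close to the "acc" value in the "all" block (0.5717896950225979)
```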
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]

---
pretty_name: Evaluation run of Sao10K/Euryale-1.3-L2-70B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Sao10K/Euryale-1.3-L2-70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Sao10K__Euryale-1.3-L2-70B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-26T00:11:50.324232](https://huggingface.co/datasets/open-llm-leaderboard/details_Sao10K__Euryale-1.3-L2-70B/blob/main/results_2023-10-26T00-11-50.324232.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You can find each in the results and the \"latest\"\
\ split for each eval):\n\n```python\n{\n    \"all\": {\n        \"em\": 0.5388003355704698,\n\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.5388003355704698,\n\
\ \"em_stderr\": 0.005105027329360947,\n \"f1\": 0.6009920302013437,\n\
\ \"f1_stderr\": 0.004740248039821831,\n \"acc\": 0.5849328585370874,\n\
\ \"acc_stderr\": 0.011836910620214903\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.5388003355704698,\n \"em_stderr\": 0.005105027329360947,\n\
\ \"f1\": 0.6009920302013437,\n \"f1_stderr\": 0.004740248039821831\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3419257012888552,\n \
\ \"acc_stderr\": 0.013066089625182799\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8279400157853196,\n \"acc_stderr\": 0.010607731615247007\n\
\ }\n}\n```"
repo_url: https://huggingface.co/Sao10K/Euryale-1.3-L2-70B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|arc:challenge|25_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_26T00_11_50.324232
path:
- '**/details_harness|drop|3_2023-10-26T00-11-50.324232.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-26T00-11-50.324232.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_26T00_11_50.324232
path:
- '**/details_harness|gsm8k|5_2023-10-26T00-11-50.324232.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-26T00-11-50.324232.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hellaswag|10_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-12T17-36-24.431746.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-12T17-36-24.431746.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-12T17-36-24.431746.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_26T00_11_50.324232
path:
- '**/details_harness|winogrande|5_2023-10-26T00-11-50.324232.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-26T00-11-50.324232.parquet'
- config_name: results
data_files:
- split: 2023_10_12T17_36_24.431746
path:
- results_2023-10-12T17-36-24.431746.parquet
- split: 2023_10_26T00_11_50.324232
path:
- results_2023-10-26T00-11-50.324232.parquet
- split: latest
path:
- results_2023-10-26T00-11-50.324232.parquet
---
# Dataset Card for Evaluation run of Sao10K/Euryale-1.3-L2-70B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Sao10K/Euryale-1.3-L2-70B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [Sao10K/Euryale-1.3-L2-70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Sao10K__Euryale-1.3-L2-70B",
"harness_winogrande_5",
split="train")
```
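Since each run's split is named after its timestamp (e.g. `2023_10_12T17_36_24.431746`), the most recent run can also be located programmatically by parsing the split names. A minimal sketch, assuming the split names follow that underscore-separated timestamp pattern (`latest` is an alias, not a timestamp):

```python
from datetime import datetime

def latest_run_split(split_names):
    """Return the most recent timestamped split name.

    Split names look like '2023_10_12T17_36_24.431746'; the literal
    'latest' alias is skipped so only real run timestamps are compared.
    """
    timestamped = [s for s in split_names if s != "latest"]
    # Parse '2023_10_12T17_36_24.431746' into a datetime for comparison.
    return max(
        timestamped,
        key=lambda s: datetime.strptime(s, "%Y_%m_%dT%H_%M_%S.%f"),
    )

splits = ["2023_10_12T17_36_24.431746", "2023_10_26T00_11_50.324232", "latest"]
print(latest_run_split(splits))  # -> 2023_10_26T00_11_50.324232
```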
## Latest results
These are the [latest results from run 2023-10-26T00:11:50.324232](https://huggingface.co/datasets/open-llm-leaderboard/details_Sao10K__Euryale-1.3-L2-70B/blob/main/results_2023-10-26T00-11-50.324232.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.5388003355704698,
"em_stderr": 0.005105027329360947,
"f1": 0.6009920302013437,
"f1_stderr": 0.004740248039821831,
"acc": 0.5849328585370874,
"acc_stderr": 0.011836910620214903
},
"harness|drop|3": {
"em": 0.5388003355704698,
"em_stderr": 0.005105027329360947,
"f1": 0.6009920302013437,
"f1_stderr": 0.004740248039821831
},
"harness|gsm8k|5": {
"acc": 0.3419257012888552,
"acc_stderr": 0.013066089625182799
},
"harness|winogrande|5": {
"acc": 0.8279400157853196,
"acc_stderr": 0.010607731615247007
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
-0.405647873878479,
-0.645077109336853,
0.1442820131778717,
0.2899668216705322,
-0.1164555773139,
0.12427018582820892,
-0.3191041648387909,
-0.24630270898342133,
0.49042394757270813,
0.5507166385650635,
-0.5631654858589172,
-0.8715125322341919,
-0.6824694871902466,
0.2695794999599457,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
NobodyExistsOnTheInternet/SillyJSON | NobodyExistsOnTheInternet | 2023-10-17T11:45:58Z | 70 | 0 | null | [
"license:mit",
"region:us"
] | 2023-10-17T11:45:58Z | 2023-10-17T10:51:09.000Z | 2023-10-17T10:51:09 | ---
license: mit
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
satellite-image-deep-learning/SODA-A | satellite-image-deep-learning | 2023-10-22T05:19:07Z | 70 | 0 | null | [
"license:mit",
"remote-sensing",
"oriented-bounding-boxes",
"object-detection",
"region:us"
] | 2023-10-22T05:19:07Z | 2023-10-22T03:38:59.000Z | 2023-10-22T03:38:59 | ---
license: mit
tags:
- remote-sensing
- oriented-bounding-boxes
- object-detection
---
SODA-A comprises 2,513 high-resolution aerial images with 872,069 instances annotated with oriented bounding boxes across 9 classes.
- [Website](https://shaunyuan22.github.io/SODA/)

| [
-0.6620054841041565,
-0.673785388469696,
0.365078330039978,
0.648489236831665,
-0.14653311669826508,
0.16250406205654144,
0.3489857316017151,
-0.17005635797977448,
0.49742695689201355,
0.7866130471229553,
-0.44963619112968445,
-0.3350732624530792,
-0.1258147954940796,
0.3026998043060303,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Sree1994/ddb_baseprompts | Sree1994 | 2023-10-26T22:52:37Z | 70 | 0 | null | [
"region:us"
] | 2023-10-26T22:52:37Z | 2023-10-26T22:52:33.000Z | 2023-10-26T22:52:33 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: Base_prompt
dtype: string
- name: Prompt
dtype: string
splits:
- name: train
num_bytes: 14886028
num_examples: 51602
- name: test
num_bytes: 2096918
num_examples: 7299
- name: valid
num_bytes: 4301342
num_examples: 14817
download_size: 10829614
dataset_size: 21284288
---
# Dataset Card for "ddb_baseprompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.790877103805542,
-0.4976828992366791,
0.1873466670513153,
0.4290103614330292,
-0.2875673472881317,
-0.04567737132310867,
0.3738917112350464,
0.10513623058795929,
0.9450865387916565,
0.5292344689369202,
-0.8369030952453613,
-0.955955982208252,
-0.6853781342506409,
-0.2715638279914856,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Lostkyd/InstructionDataset | Lostkyd | 2023-10-31T15:15:10Z | 70 | 0 | null | [
"region:us"
] | 2023-10-31T15:15:10Z | 2023-10-30T07:11:46.000Z | 2023-10-30T07:11:46 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
erikaxenia/id_card | erikaxenia | 2023-10-31T22:12:40Z | 70 | 0 | null | [
"region:us"
] | 2023-10-31T22:12:40Z | 2023-10-30T16:33:10.000Z | 2023-10-30T16:33:10 | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 75251549.0
num_examples: 276
- name: valid
num_bytes: 7840082.0
num_examples: 38
- name: test
num_bytes: 4404357.0
num_examples: 50
download_size: 0
dataset_size: 87495988.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
---
# Dataset Card for "id_card"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5979744791984558,
-0.25669336318969727,
0.22270064055919647,
0.19040314853191376,
-0.2969394326210022,
0.03154686838388443,
0.37068748474121094,
-0.16562004387378693,
0.8608362674713135,
0.3227086067199707,
-0.7834056615829468,
-0.9043579697608948,
-0.4895924925804138,
-0.18056930601596... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sade-adrien/context_extension-mistral-7k | sade-adrien | 2023-11-08T20:21:36Z | 70 | 0 | null | [
"region:us"
] | 2023-11-08T20:21:36Z | 2023-11-08T20:20:45.000Z | 2023-11-08T20:20:45 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
dataset_info:
features:
- name: text
dtype: string
- name: meta
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: label
sequence: int64
splits:
- name: train
num_bytes: 1627254691
num_examples: 8774
- name: val
num_bytes: 176513698
num_examples: 975
download_size: 782669168
dataset_size: 1803768389
---
# Dataset Card for "context_extension-mistral-7k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6148576736450195,
-0.22606654465198517,
0.07958760857582092,
0.325715035200119,
-0.46063873171806335,
-0.5872717499732971,
0.1389818638563156,
-0.3004479706287384,
0.6314133405685425,
0.4986322522163391,
-0.8864334225654602,
-0.7069822549819946,
-0.6436980962753296,
-0.03318016976118088... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
justinphan3110/repe_emotions_function_llama2_chat | justinphan3110 | 2023-11-09T21:33:54Z | 70 | 0 | null | [
"region:us"
] | 2023-11-09T21:33:54Z | 2023-11-09T21:33:49.000Z | 2023-11-09T21:33:49 | ---
dataset_info:
features:
- name: sentence
sequence: string
- name: label
sequence: bool
splits:
- name: happiness
num_bytes: 82983
num_examples: 582
- name: sadness
num_bytes: 83172
num_examples: 582
- name: anger
num_bytes: 82272
num_examples: 582
- name: fear
num_bytes: 82870
num_examples: 582
- name: disgust
num_bytes: 83999
num_examples: 582
- name: surprise
num_bytes: 84882
num_examples: 582
download_size: 96046
dataset_size: 500178
---
# Dataset Card for "repe_emotions_function_llama2_chat"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.49369922280311584,
-0.15566124022006989,
0.12871083617210388,
0.5182561874389648,
-0.4892606735229492,
0.13262325525283813,
0.14082656800746918,
-0.20090210437774658,
0.9248308539390564,
0.3422962725162506,
-1.0017764568328857,
-0.8234772086143494,
-0.6146249175071716,
-0.09108658134937... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mjphayes/machine_learning_questions | mjphayes | 2023-11-12T11:52:33Z | 70 | 1 | null | [
"region:us"
] | 2023-11-12T11:52:33Z | 2023-11-12T11:42:42.000Z | 2023-11-12T11:42:42 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 120983.07547169812
num_examples: 508
- name: test
num_bytes: 30483.924528301886
num_examples: 128
download_size: 85722
dataset_size: 151467.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "machine_learning_questions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.685335099697113,
-0.6735947132110596,
0.1912374198436737,
0.024574993178248405,
-0.031173834577202797,
-0.12106134742498398,
0.1985618621110916,
-0.029092224314808846,
0.5549455285072327,
0.521904706954956,
-0.9455773234367371,
-0.6582267880439758,
-0.5241965651512146,
-0.20092006027698... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ahmedeltaron/fine_llama_2_v1 | ahmedeltaron | 2023-11-12T14:18:39Z | 70 | 0 | null | [
"region:us"
] | 2023-11-12T14:18:39Z | 2023-11-12T12:48:24.000Z | 2023-11-12T12:48:24 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
atmallen/qm_alice_grader_last_1.0e | atmallen | 2023-11-16T18:22:29Z | 70 | 0 | null | [
"region:us"
] | 2023-11-16T18:22:29Z | 2023-11-16T03:25:17.000Z | 2023-11-16T03:25:17 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: alice_label
dtype: bool
- name: bob_label
dtype: bool
- name: difficulty
dtype: int64
- name: statement
dtype: string
- name: choices
sequence: string
- name: character
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: train
num_bytes: 14970044.0
num_examples: 200000
- name: validation
num_bytes: 1501418.0
num_examples: 20000
- name: test
num_bytes: 1502170.0
num_examples: 20000
download_size: 0
dataset_size: 17973632.0
---
# Dataset Card for "qm_alice__grader_last_1.0e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3799852132797241,
-0.2061765193939209,
0.2987824082374573,
-0.04823189973831177,
-0.012846463359892368,
-0.04036751762032509,
0.6796700358390808,
0.0933649018406868,
0.6217129230499268,
0.3801323175430298,
-0.5887247920036316,
-0.9736206531524658,
-0.6082561016082764,
-0.369418978691101... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
derek-thomas/dataset-creator-reddit-bestofredditorupdates | derek-thomas | 2023-11-28T05:01:04Z | 70 | 0 | null | [
"region:us"
] | 2023-11-28T05:01:04Z | 2023-11-17T07:44:26.000Z | 2023-11-17T07:44:26 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: content
dtype: string
- name: score
dtype: int64
- name: date_utc
dtype: timestamp[ns]
- name: title
dtype: string
- name: flair
dtype: string
- name: poster
dtype: string
- name: permalink
dtype: string
- name: updated
dtype: bool
- name: new
dtype: bool
splits:
- name: train
num_bytes: 62263430
num_examples: 10125
download_size: 36283957
dataset_size: 62263430
---
# Dataset Card for "dataset-creator-reddit-bestofredditorupdates"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
--- Generated Part of README Below ---
## Dataset Overview
The goal is to have an open dataset of [r/bestofredditorupdates](https://www.reddit.com/r/bestofredditorupdates/) submissions. I'm leveraging PRAW and the Reddit API to download them.

The API limits each call to 1000 submissions and offers only limited search functionality, so this job runs daily to collect new submissions.
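Because each daily fetch can overlap with previous days' results, the update step amounts to a dedup-and-append over submission ids. A hypothetical illustration (the `id` field matches this dataset's schema, but the fetch itself is stubbed, since real PRAW calls require API credentials):

```python
def merge_new_submissions(existing_rows, fetched_rows):
    """Append only submissions whose id is not already in the dataset.

    existing_rows / fetched_rows: lists of dicts with at least an 'id' key.
    Returns (merged_rows, n_new) so the daily job can report row counts.
    """
    seen = {row["id"] for row in existing_rows}
    new_rows = [row for row in fetched_rows if row["id"] not in seen]
    return existing_rows + new_rows, len(new_rows)

# The API caps a single listing call at 1000 items, so each daily fetch
# overlaps with previous days; deduplication keeps the dataset consistent.
existing = [{"id": "abc1", "title": "old post"}]
fetched = [{"id": "abc1", "title": "old post"}, {"id": "xyz9", "title": "new post"}]
merged, n_new = merge_new_submissions(existing, fetched)
print(n_new)  # -> 1
```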
## Creation Details
This dataset was created by [derek-thomas/dataset-creator-reddit-bestofredditorupdates](https://huggingface.co/spaces/derek-thomas/dataset-creator-reddit-bestofredditorupdates)
## Update Frequency
The dataset is updated daily with the most recent update being `2023-11-28 05:00:00 UTC+0000` where we added **8 new rows**.
## Licensing
[Reddit Licensing terms](https://www.redditinc.com/policies/data-api-terms) as accessed on October 25:
> The Content created with or submitted to our Services by Users (“User Content”) is owned by Users and not by Reddit. Subject to your complete and ongoing compliance with the Data API Terms, Reddit grants you a non-exclusive, non-transferable, non-sublicensable, and revocable license to copy and display the User Content using the Data API solely as necessary to develop, deploy, distribute, and run your App to your App Users. You may not modify the User Content except to format it for such display. You will comply with any requirements or restrictions imposed on usage of User Content by their respective owners, which may include "all rights reserved" notices, Creative Commons licenses, or other terms and conditions that may be agreed upon between you and the owners. Except as expressly permitted by this section, no other rights or licenses are granted or implied, including any right to use User Content for other purposes, such as for training a machine learning or AI model, without the express permission of rightsholders in the applicable User Content
My take is that you can't use this data for *training* without getting permission.
## Opt-out
To opt out of this dataset, please make a request in the community tab.
| [
-0.5148395299911499,
-0.39006054401397705,
0.1745026558637619,
0.5577661991119385,
-0.4171229600906372,
-0.19923502206802368,
-0.28746846318244934,
-0.3179904818534851,
0.49716949462890625,
0.47889548540115356,
-0.7011710405349731,
-0.6991574168205261,
-0.6033934950828552,
0.50836819410324... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
thanaphatt1/LongAlpaca-12k-th | thanaphatt1 | 2023-11-22T10:03:09Z | 70 | 2 | null | [
"region:us"
] | 2023-11-22T10:03:09Z | 2023-11-17T12:19:28.000Z | 2023-11-17T12:19:28 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 1171826005
num_examples: 11908
download_size: 434360238
dataset_size: 1171826005
---
# Dataset Card for "LongAlpaca-12k-th"
Thai-translated version of https://huggingface.co/datasets/Yukang/LongAlpaca-12k, translated with Google Translate.
-0.22440221905708313,
-0.45982158184051514,
0.018655596300959587,
1.1169742345809937,
-1.0941110849380493,
-0.07634415477514267,
-0.26360374689102173,
-0.7715576887130737,
1.0147571563720703,
0.7239559292793274,
-0.8055413961410522,
-0.7943804860115051,
-0.7148548364639282,
0.4213974773883... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sharkMeow/test | sharkMeow | 2023-11-25T01:39:00Z | 70 | 0 | null | [
"region:us"
] | 2023-11-25T01:39:00Z | 2023-11-22T07:36:48.000Z | 2023-11-22T07:36:48 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
florentgbelidji/oa_german | florentgbelidji | 2023-11-22T17:50:06Z | 70 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-22T17:50:06Z | 2023-11-22T17:49:19.000Z | 2023-11-22T17:49:19 | ---
license: apache-2.0
dataset_info:
features:
- name: conversation_id
dtype: string
- name: user_id
dtype: string
- name: created_date
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: lang_original
dtype: string
- name: could_be_code
dtype: bool
splits:
- name: train_english
num_bytes: 29675151
num_examples: 18192
- name: train_german
num_bytes: 28931906
num_examples: 18192
download_size: 21854409
dataset_size: 58607057
configs:
- config_name: default
data_files:
- split: train_english
path: data/train_english-*
- split: train_german
path: data/train_german-*
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mozilla-foundation/common_voice_4_0 | mozilla-foundation | 2023-07-29T16:00:01Z | 69 | 1 | common-voice | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:extended|common_voice",
"license:cc0-1.0",
"arxiv:1912.06670",
"region:us"
] | 2023-07-29T16:00:01Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
license:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
ab:
- n<1K
ar:
- 10K<n<100K
br:
- 10K<n<100K
ca:
- 100K<n<1M
cnh:
- 1K<n<10K
cv:
- 1K<n<10K
cy:
- 10K<n<100K
de:
- 100K<n<1M
dv:
- 1K<n<10K
en:
- 1M<n<10M
eo:
- 10K<n<100K
es:
- 100K<n<1M
et:
- 1K<n<10K
eu:
- 10K<n<100K
fa:
- 100K<n<1M
fr:
- 100K<n<1M
ga-IE:
- 1K<n<10K
ia:
- 1K<n<10K
id:
- 1K<n<10K
it:
- 10K<n<100K
ja:
- 1K<n<10K
kab:
- 100K<n<1M
ky:
- 10K<n<100K
lv:
- 1K<n<10K
mn:
- 1K<n<10K
nl:
- 10K<n<100K
pt:
- 10K<n<100K
rm-sursilv:
- n<1K
ru:
- 10K<n<100K
rw:
- 10K<n<100K
sah:
- 1K<n<10K
sl:
- 1K<n<10K
sv-SE:
- 1K<n<10K
ta:
- 1K<n<10K
tr:
- 10K<n<100K
tt:
- 10K<n<100K
vot:
- n<1K
zh-CN:
- 10K<n<100K
zh-HK:
- n<1K
zh-TW:
- 10K<n<100K
source_datasets:
- extended|common_voice
paperswithcode_id: common-voice
pretty_name: Common Voice Corpus 4
language_bcp47:
- ab
- ar
- br
- ca
- cnh
- cv
- cy
- de
- dv
- en
- eo
- es
- et
- eu
- fa
- fr
- ga-IE
- ia
- id
- it
- ja
- kab
- ky
- lv
- mn
- nl
- pt
- rm-sursilv
- ru
- rw
- sah
- sl
- sv-SE
- ta
- tr
- tt
- vot
- zh-CN
- zh-HK
- zh-TW
extra_gated_prompt: By clicking on “Access repository” below, you also agree to not
attempt to determine the identity of speakers in the Common Voice dataset.
task_categories:
- automatic-speech-recognition
---
# Dataset Card for Common Voice Corpus 4
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Anton Lozhkov](mailto:anton@huggingface.co)
### Dataset Summary
The Common Voice dataset consists of a unique MP3 and corresponding text file.
Many of the 4257 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 3401 validated hours in 40 languages, but more voices and languages are always added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
### Supported Tasks and Leaderboards
The results for models trained on the Common Voice datasets are available via the
[🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
### Languages
```
Abkhaz, Arabic, Basque, Breton, Catalan, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Dhivehi, Dutch, English, Esperanto, Estonian, French, German, Hakha Chin, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kinyarwanda, Kyrgyz, Latvian, Mongolian, Persian, Portuguese, Romansh Sursilvan, Russian, Sakha, Slovenian, Spanish, Swedish, Tamil, Tatar, Turkish, Votic, Welsh
```
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{
'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
'path': 'et/clips/common_voice_et_18318995.mp3',
'audio': {
'path': 'et/clips/common_voice_et_18318995.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 48000
},
'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
'up_votes': 2,
'down_votes': 0,
'age': 'twenties',
'gender': 'male',
'accent': '',
'locale': 'et',
'segment': ''
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
`sentence` (`string`): The sentence the user was prompted to speak
`up_votes` (`int64`): How many upvotes the audio file has received from reviewers
`down_votes` (`int64`): How many downvotes the audio file has received from reviewers
`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
`gender` (`string`): The gender of the speaker
`accent` (`string`): Accent of the speaker
`locale` (`string`): The locale of the speaker
`segment` (`string`): Usually an empty field
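To make the decoding/resampling note above concrete, here is a naive, illustrative resampler. This is *not* the implementation used by `datasets` (which decodes and resamples the `audio` column for you); it is only a minimal sketch of what resampling an array from one sampling rate to another entails:

```python
# Naive linear-interpolation resampler, for illustration only.
def resample(array, orig_sr, target_sr):
    """Resample a 1-D sequence of samples from orig_sr to target_sr."""
    if orig_sr == target_sr:
        return list(array)
    n_out = int(round(len(array) * target_sr / orig_sr))
    out = []
    for i in range(n_out):
        # Position of output sample i in the input's time base
        pos = i * (len(array) - 1) / max(n_out - 1, 1)
        lo = int(pos)
        hi = min(lo + 1, len(array) - 1)
        frac = pos - lo
        out.append(array[lo] * (1 - frac) + array[hi] * frac)
    return out

# Downsampling a 48 kHz clip to 16 kHz keeps one sample in three.
clip_48k = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
clip_16k = resample(clip_48k, 48_000, 16_000)
print(len(clip_16k))  # 2 samples instead of 6
```

In practice you should rely on the library's own decoding rather than resampling by hand; this sketch only shows why decoding many files can be slow and why index-first access (`dataset[0]["audio"]`) matters.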
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data has been reviewed and received upvotes indicating that it is of high quality.
The invalidated data has been reviewed and received downvotes indicating that it is of low quality.
The reported data has been reported by users, for a variety of reasons.
The other data has not yet been reviewed.
The dev, test, and train portions have all been reviewed, deemed of high quality, and split into dev, test, and train sets.
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them to practice.
Many examples in this dataset have trailing quotation marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is nearly impossible to infer from the audio data alone whether a sentence is a quotation. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
```python
from datasets import load_dataset
ds = load_dataset("mozilla-foundation/common_voice_4_0", "en", use_auth_token=True)
def prepare_dataset(batch):
    """Function to preprocess the dataset with the .map method"""
    transcription = batch["sentence"]
    if transcription.startswith('"') and transcription.endswith('"'):
        # we can remove trailing quotation marks as they do not affect the transcription
        transcription = transcription[1:-1]
    if transcription and transcription[-1] not in [".", "?", "!"]:
        # append a full-stop to sentences that do not end in punctuation
        transcription = transcription + "."
    batch["sentence"] = transcription
    return batch
ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
| [
-0.5396748185157776,
-0.7206558585166931,
0.1420140117406845,
0.4520561099052429,
-0.24900032579898834,
0.04414152726531029,
-0.5721829533576965,
-0.22797642648220062,
0.430916428565979,
0.5607017278671265,
-0.7474513649940491,
-0.9519975185394287,
-0.426932692527771,
0.2652043402194977,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
persiannlp/parsinlu_sentiment | persiannlp | 2022-10-22T15:13:40Z | 69 | 4 | null | [
"task_ids:sentiment-analysis",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|translated|mnli",
"language:fa",
"license:cc-by-nc-sa-4.0",
"arxiv:2012.06154",
"region:us"
] | 2022-10-22T15:13:40Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- fa
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|translated|mnli
task_categories:
- sentiment-analysis
task_ids:
- sentiment-analysis
---
# Dataset Card for PersiNLU (Sentiment Analysis)
## Table of Contents
- [Dataset Card for PersiNLU (Sentiment Analysis)](#dataset-card-for-persi_sentiment)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/persiannlp/parsinlu/)
- **Repository:** [Github](https://github.com/persiannlp/parsinlu/)
- **Paper:** [Arxiv](https://arxiv.org/abs/2012.06154)
- **Leaderboard:**
- **Point of Contact:** d.khashabi@gmail.com
### Dataset Summary
A Persian sentiment analysis dataset.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text dataset is in Persian (`fa`).
## Dataset Structure
### Data Instances
Here is an example from the dataset:
```json
{
"review": "خوب بود ولی خیلی گرون شده دیگه...فک نکنم به این قیمت ارزش خرید داشته باشد",
"review_id": "1538",
"example_id": "4",
"excel_id": "food_194",
"question": "نظر شما در مورد بسته بندی و نگهداری این حلوا شکری، ارده و کنجد چیست؟",
"category": "حلوا شکری، ارده و کنجد",
"aspect": "بسته بندی",
"label": "-3",
"guid": "food-dev-r1538-e4"
}
```
### Data Fields
- `review`: the review text.
- `review_id`: a unique id associated with the review.
- `example_id`: a unique id associated with a particular attribute being addressed about the review.
- `question`: a natural language question about a particular attribute.
- `category`: the subject discussed in the review.
- `aspect`: the aspect mentioned in the input question.
- `label`: the overall sentiment towards this particular subject, in the context of the mentioned aspect. Here are the definitions of the labels:
```
'-3': 'no sentiment expressed',
'-2': 'very negative',
'-1': 'negative',
'0': 'neutral',
'1': 'positive',
'2': 'very positive',
'3': 'mixed',
```
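As a convenience, the label strings above can be mapped to readable names. This helper is illustrative only (it is not part of the dataset itself), e.g. for use with `datasets.Dataset.map`:

```python
# Map the raw string labels documented above to human-readable names.
LABEL_NAMES = {
    "-3": "no sentiment expressed",
    "-2": "very negative",
    "-1": "negative",
    "0": "neutral",
    "1": "positive",
    "2": "very positive",
    "3": "mixed",
}

def decode_label(example):
    """Add a `label_name` field alongside the raw `label` string."""
    example["label_name"] = LABEL_NAMES[example["label"]]
    return example

sample = {"review_id": "1538", "label": "-3"}
print(decode_label(sample)["label_name"])  # no sentiment expressed
```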
### Data Splits
See the data.
## Dataset Creation
### Curation Rationale
For details, check [the corresponding draft](https://arxiv.org/abs/2012.06154).
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY-NC-SA 4.0 License
### Citation Information
```bibtex
@article{huggingface:dataset,
    title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
    author = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
    year = {2020},
    journal = {arXiv e-prints},
    eprint = {2012.06154},
}
```
### Contributions
Thanks to [@danyaljj](https://github.com/danyaljj) for adding this dataset.
| [
-0.660115122795105,
-0.7845662832260132,
0.213750958442688,
0.33460068702697754,
-0.31792813539505005,
0.036596521735191345,
-0.4798116683959961,
-0.08885382115840912,
0.4872702360153198,
0.4458199143409729,
-0.7190667986869812,
-1.113569974899292,
-0.5646593570709229,
0.37066057324409485,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
susumu2357/squad_v2_sv | susumu2357 | 2022-07-01T18:31:20Z | 69 | 0 | null | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|wikipedia",
"language:sv",
"license:apache-2.0",
"region:us"
] | 2022-07-01T18:31:20Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
language:
- sv
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# Dataset Card for "squad_v2_sv"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/susumu2357/SQuAD_v2_sv](https://github.com/susumu2357/SQuAD_v2_sv)
- **Repository:** [https://github.com/susumu2357/SQuAD_v2_sv](https://github.com/susumu2357/SQuAD_v2_sv)
- **Paper:** None
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 10.09 MB
- **Size of the generated dataset:** 113.27 MB
- **Total amount of disk used:** 123.36 MB
### Dataset Summary
SQuAD_v2_sv is a Swedish version of SQuAD2.0. Translation was done automatically using the Google Translate API, but this is not entirely straightforward, for the following reasons:
- The span that determines the start and end of the answer in the context may change after translation.
- If the context and the answer are translated independently, the translated answer may not appear in the translated context.
Details on how these difficulties were handled are described in the GitHub repo.
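To illustrate the second difficulty, the translated answer must be re-located inside the translated context so that `answer_start` can be recomputed. The sketch below is hypothetical (its names and logic are not the repository's actual pipeline):

```python
# Illustrative sketch: after translating context and answer independently,
# re-locate the answer span in the translated context and recompute
# `answer_start`; examples where the span cannot be found need special care.

def relocate_answer(translated_context, translated_answer):
    """Return a SQuAD-style answer dict, or None if the span was lost."""
    start = translated_context.find(translated_answer)
    if start == -1:
        return None  # the answer is not a substring of the translated context
    return {"text": translated_answer, "answer_start": start}

ctx = "Stockholm är Sveriges huvudstad."
print(relocate_answer(ctx, "Sveriges huvudstad"))
# {'text': 'Sveriges huvudstad', 'answer_start': 13}
```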
### Supported Tasks
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
Swedish
## Dataset Structure
### Data Fields
The data fields are the same among all splits.
#### squad_v2
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
### Data Splits Sample Size
| name |train |validation|
|--------|-----:|---------:|
|squad_v2_sv|113898| 11156|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@misc{squad_v2_sv,
author = {Susumu Okazawa},
title = {Swedish translation of SQuAD2.0},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/susumu2357/SQuAD_v2_sv}}
}
``` | [
-0.505115807056427,
-0.41266462206840515,
0.07650383561849594,
0.2760255038738251,
-0.3953876793384552,
0.24663805961608887,
-0.23794060945510864,
-0.4947514235973358,
0.6222890615463257,
0.2632373571395874,
-1.1596840620040894,
-0.8156179785728455,
-0.5624313950538635,
0.03008623048663139... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gigant/horse2zebra | gigant | 2022-10-24T17:37:53Z | 69 | 1 | null | [
"task_categories:image-to-image",
"license:cc",
"GAN",
"unpaired-image-to-image-translation",
"arxiv:1703.10593",
"region:us"
] | 2022-10-24T17:37:53Z | 2022-03-11T09:59:03.000Z | 2022-03-11T09:59:03 | ---
license: cc
task_categories:
- image-to-image
task_ids: []
pretty_name: Horse2Zebra
tags:
- GAN
- unpaired-image-to-image-translation
---
## Dataset Description
- **Homepage:** https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
- **Paper:** https://arxiv.org/abs/1703.10593
### Dataset Summary
This dataset was obtained from the original CycleGAN Datasets directory available on [Berkeley's website](https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/).
For more details about the dataset you can refer to the [original CycleGAN publication](https://arxiv.org/abs/1703.10593).
### How to use
You can easily load the dataset with the following lines :
```python
from datasets import load_dataset
data_horses = load_dataset("gigant/horse2zebra", name="horse", split="train")
data_zebras = load_dataset("gigant/horse2zebra", name="zebra", split="train")
```
Two splits are available, `"train"` and `"test"`
### Citation Information
```
@inproceedings{CycleGAN2017,
title={Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks},
author={Zhu, Jun-Yan and Park, Taesung and Isola, Phillip and Efros, Alexei A},
booktitle={Computer Vision (ICCV), 2017 IEEE International Conference on},
year={2017}
}
``` | [
-0.29630976915359497,
-0.2062680721282959,
0.05745363235473633,
0.159950390458107,
-0.46827179193496704,
-0.19227585196495056,
-0.23218457400798798,
-0.5601045489311218,
0.10035927593708038,
0.6040974259376526,
-0.5663497447967529,
-0.5882795453071594,
-0.5080281496047974,
0.11959194391965... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Moo/korean-parallel-corpora | Moo | 2022-07-01T15:32:54Z | 69 | 6 | null | [
"task_categories:translation",
"annotations_creators:other",
"language_creators:other",
"multilinguality:multilingual",
"multilinguality:translation",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ko",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | 2022-07-01T15:32:54Z | 2022-05-16T07:35:42.000Z | 2022-05-16T07:35:42 | ---
annotations_creators:
- other
language_creators:
- other
language:
- ko
- en
license:
- cc-by-sa-3.0
multilinguality:
- multilingual
- translation
pretty_name: 'korean-parallel-corpora'
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sil-ai/bloom-speech | sil-ai | 2023-02-15T13:28:59Z | 69 | 16 | null | [
"task_categories:automatic-speech-recognition",
"task_categories:text-to-speech",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ajz",
"language:bam",
"language:bi",
... | 2023-02-15T13:28:59Z | 2022-06-09T12:08:44.000Z | 2022-06-09T12:08:44 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- ajz
- bam
- bi
- bis
- bjn
- bm
- boz
- bze
- bzi
- cak
- ceb
- chd
- chp
- clo
- csw
- en
- eng
- es
- fli
- fr
- fra
- gu
- guj
- hbb
- hi
- hin
- id
- ind
- jmx
- jra
- kan
- kbq
- kek
- kjb
- kmu
- kn
- kqr
- kwu
- loh
- mai
- mal
- mam
- mar
- ml
- mle
- mr
- my
- mya
- myk
- nas
- nsk
- nsn
- oj
- oji
- omw
- por
- pt
- quc
- sdk
- snk
- spa
- stk
- ta
- taj
- tam
- tbj
- tdc
- tgl
- tl
- tpi
- tuz
- tzj
license:
- cc-by-nc-4.0
- cc-by-sa-4.0
- cc-by-nc-nd-4.0
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- automatic-speech-recognition
- text-to-speech
paperswithcode_id: null
pretty_name: BloomSpeech
extra_gated_prompt: |-
One more step before getting this dataset. This dataset is open access and available only for non-commercial use (except for portions of the dataset labeled with a `cc-by-sa` license). A "license" field paired with each of the dataset entries/samples specifies the Creative Commons license for that entry/sample.
These [Creative Commons licenses](https://creativecommons.org/about/cclicenses/) specify that:
1. You cannot use the dataset for or directed toward commercial advantage or monetary compensation (except for those portions of the dataset labeled specifically with a `cc-by-sa` license. If you would like to ask about commercial uses of this dataset, please [email us](mailto:sj@derivation.co).
2. Any public, non-commercial use of the data must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
3. For those portions of the dataset marked with an ND license, you cannot remix, transform, or build upon the material, and you may not distribute modified material.
In addition to the above implied by Creative Commons and when clicking "Access Repository" below, you agree:
1. Not to use the dataset for any use intended to or which has the effect of harming or enabling discrimination against individuals or groups based on legally protected characteristics or categories, including but not limited to discrimination against Indigenous People as outlined in Articles 2; 13-16; and 31 of the United Nations Declaration on the Rights of Indigenous People, 13 September 2007 and as subsequently amended and revised.
2. That your *contact information* (email address and username) can be shared with the model authors as well.
extra_gated_fields:
I have read the License and agree with its terms: checkbox
---
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
<!-- - [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions) -->
## Dataset Description
- **Homepage:** [SIL AI](https://ai.sil.org/)
- **Point of Contact:** [SIL AI email](mailto:idx_aqua@sil.org)
- **Source Data:** [Bloom Library](https://bloomlibrary.org/)
 
## Dataset Summary
**Bloom** is free, open-source software and an associated website [Bloom Library](https://bloomlibrary.org/), app, and services developed by [SIL International](https://www.sil.org/). Bloom’s primary goal is to equip non-dominant language communities and their members to create the literature they want for their community and children. Bloom also serves organizations that help such communities develop literature and education or other aspects of community development.
This version of the Bloom Library data is developed specifically for the automatic speech recognition and speech-to-text tasks. It includes data from 56 languages across 18 language families. There is a mean of 458 and median of 138 audio records per language.
**Note**: If you speak one of these languages and can help provide feedback or corrections, please let us know!
**Note**: Although data from [bloom-lm](https://huggingface.co/datasets/sil-ai/bloom-lm) was used in the training of the [BLOOM model](https://huggingface.co/bigscience/bloom), the dataset only represents a small portion of the data used to train that model. Data from "Bloom Library" was combined with a large number of other datasets to train that model. "Bloom Library" is a project that existed prior to the BLOOM model, and is something separate. All that to say... We were using the "Bloom" name before it was cool. 😉
## Languages
Of the 500+ languages listed at BloomLibrary.org, there are 56 languages available in this dataset. Here are the corresponding ISO 639-3 codes:
ajz, bam, bis, bjn, boz, bze, bzi, cak, ceb, chd, chp, clo, csw, eng, fli, fra, guj, hbb, hin, ind, jmx, jra, kan, kbq, kek, kjb, kmu, kqr, kwu, loh, mai, mal, mam, mar, mle, mya, myk, nas, nsk, nsn, oji, omw, por, quc, sdk, snk, spa, stk, taj, tam, tbj, tdc, tgl, tpi, tuz, tzj
## Dataset Statistics
Some of the languages in this dataset have only a few audio cuts; these are not split between training, validation, and test. For languages with larger numbers of available cuts, we include the following numbers of cuts in each split:
| ISO 639-3 | Name | Train Cuts | Validation Cuts | Test Cuts |
|:------------|:------------------------------|----------------:|---------------------:|---------------:|
| ajz | Amri Karbi | 135 | 34 | 50 |
| bam | Bamanankan | 203 | 50 | 50 |
| bis | Bislama | 0 | 0 | 46 |
| bjn | Banjar | 80 | 20 | 50 |
| boz | Bozo, Tieyaxo | 427 | 50 | 52 |
| bze | Bozo, Jenaama | 101 | 26 | 50 |
| bzi | Bisu | 1363 | 50 | 157 |
| cak | Kaqchikel | 989 | 50 | 115 |
| ceb | Cebuano | 553 | 50 | 67 |
| chd | Chontal, Highland Oaxaca | 205 | 50 | 50 |
| chp | Dene | 0 | 0 | 14 |
| clo | Chontal, Lowland Oaxaca | 120 | 30 | 50 |
| csw | Cree, Swampy | 0 | 0 | 45 |
| eng | English | 4143 | 48 | 455 |
| fli | Fali Muchella | 59 | 15 | 50 |
| fra | French | 261 | 49 | 50 |
| guj | Gujarati | 27 | 0 | 48 |
| hbb | Nya Huba | 558 | 50 | 67 |
| hin | Hindi | 62 | 15 | 49 |
| ind | Indonesian | 0 | 0 | 14 |
| jmx | Mixtec, Western Juxtlahuaca | 39 | 0 | 50 |
| jra | Jarai | 203 | 50 | 50 |
| kan | Kannada | 281 | 43 | 50 |
| kbq | Kamano | 0 | 0 | 27 |
| kek | Q’eqchi’ | 1676 | 49 | 190 |
| kjb | Q’anjob’al | 770 | 50 | 91 |
| kmu | Kanite | 0 | 0 | 28 |
| kqr | Kimaragang | 0 | 0 | 18 |
| kwu | Kwakum | 58 | 15 | 50 |
| loh | Narim | 0 | 0 | 15 |
| mai | Maithili | 0 | 0 | 11 |
| mal | Malayalam | 125 | 31 | 44 |
| mam | Mam | 1313 | 50 | 151 |
| mar | Marathi | 25 | 0 | 49 |
| mle | Manambu | 0 | 0 | 8 |
| mya | Burmese | 321 | 50 | 50 |
| myk | Sénoufo, Mamara | 669 | 50 | 80 |
| nas | Naasioi | 13 | 0 | 50 |
| nsk | Naskapi | 0 | 0 | 15 |
| nsn | Nehan | 0 | 0 | 31 |
| oji | Ojibwa | 0 | 0 | 25 |
| omw | Tairora, South | 0 | 0 | 34 |
| por | Portuguese | 0 | 0 | 34 |
| quc | K’iche’ | 1460 | 50 | 167 |
| sdk | Sos Kundi | 312 | 50 | 50 |
| snk | Soninke | 546 | 50 | 66 |
| spa | Spanish | 1816 | 50 | 207 |
| stk | Aramba | 180 | 45 | 50 |
| taj | Tamang, Eastern | 0 | 0 | 24 |
| tam | Tamil | 159 | 39 | 46 |
| tbj | Tiang | 0 | 0 | 24 |
| tdc | Ẽpẽra Pedea | 0 | 0 | 19 |
| tgl | Tagalog | 352 | 48 | 50 |
| tpi | Tok Pisin | 1061 | 50 | 123 |
| tuz | Turka | 48 | 13 | 50 |
| tzj | Tz’utujil | 0 | 0 | 41 |
## Dataset Structure
### Data Instances
The examples look like this for Hindi:
```python
from datasets import load_dataset

# Specify the language code. Note: you must log in to Hugging Face first
# (via huggingface_hub or the huggingface-cli), since this dataset is gated.
dataset = load_dataset('sil-ai/bloom-speech', 'hin', use_auth_token=True)

# A data point consists of transcribed audio in the specified language.
# To see a transcription:
print(dataset['train']['text'][0])
```
This would produce an output:
```
चित्र: बो और शैम्पू की बोतल
```
Whereas if you wish to gather all the text for a language you may use this:
```
dataset['train']['text']
```
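Each `audio` entry is a standard Hugging Face audio dictionary with `array` and `sampling_rate` keys, so clip durations can be derived directly. A minimal sketch (the `clip_duration` helper is an illustration, not part of the dataset):

```python
# Sketch: inspecting the audio field of a bloom-speech example.
# Assumes the dataset has already been loaded as `dataset` (see above);
# the helper itself is self-contained.

def clip_duration(example):
    """Return the length of an audio clip in seconds."""
    audio = example["audio"]
    return len(audio["array"]) / audio["sampling_rate"]

# Example usage (requires the dataset to be loaded and authenticated):
# first = dataset["train"][0]
# print(first["audio"]["sampling_rate"])
# print(round(clip_duration(first), 2))  # clip length in seconds
```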
### Data Fields
The metadata fields are below. In terms of licenses, all stories included in the current release are released under a Creative Commons license (even if the individual story metadata fields are missing).
- **file**: the local path to the audio file
- **audio**: a dictionary with a path, array, and sampling_rate as is standard for Hugging Face audio
- **text**: the transcribed text
- **book**: title of the book, e.g. "बो मेस्सी और शैम्पू".
- **instance**: unique ID for each book/translation assigned by Bloom Library. For example the Hindi version of 'बो मेस्सी और शैम्पू' is 'eba60f56-eade-4d78-a66f-f52870f6bfdd'
- **license**: specific license used, e.g. "cc-by-sa" for "Creative Commons, by attribution, share-alike".
- **credits**: attribution of contributors as described in the book metadata, including authors, editors, etc. if available
- **original_lang_tag**: the language tag originally assigned in Bloom Library. This may include information on script type, etc.
### Data Splits
All languages include a train, validation, and test split. However, for languages with a small number of stories, some of these splits may be empty. In such cases, we recommend using any available data for testing only or for zero-shot experiments.
## Changelog
- **26 September 2022** Page initiated | [
-0.5639001727104187,
-0.18178974092006683,
0.0888209342956543,
0.3569137156009674,
0.03814839571714401,
0.19573575258255005,
-0.10284885764122009,
-0.5045491456985474,
0.5128297805786133,
0.3774260878562927,
-0.7646837830543518,
-0.7919708490371704,
-0.5378909111022949,
0.14675556123256683... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
biglam/nls_chapbook_illustrations | biglam | 2023-02-15T16:11:54Z | 69 | 7 | null | [
"task_categories:object-detection",
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:expert-generated",
"size_categories:1K<n<10K",
"license:other",
"lam",
"historic",
"arxiv:1405.0312",
"region:us"
] | 2023-02-15T16:11:54Z | 2022-07-23T21:05:40.000Z | 2022-07-23T21:05:40 | ---
annotations_creators:
- expert-generated
language_creators: []
license:
- other
multilinguality: []
pretty_name: National Library of Scotland Chapbook Illustrations
size_categories:
- 1K<n<10K
source_datasets: []
tags:
- lam
- historic
task_categories:
- object-detection
- image-classification
task_ids:
- multi-class-image-classification
---
# Dataset Card for National Library of Scotland Chapbook Illustrations
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.robots.ox.ac.uk/~vgg/research/chapbooks/
- **Repository:** https://data.nls.uk/data/digitised-collections/chapbooks-printed-in-scotland/
- **Paper:** https://www.robots.ox.ac.uk/~vgg/research/chapbooks/data/dutta2021visual.pdf
- **Leaderboard:**
- **Point of Contact:** giles.bergel@eng.ox.ac.uk
### Dataset Summary
This dataset comprises images from chapbooks held by the [National Library of Scotland](https://www.nls.uk/) and digitised and published as its [Chapbooks Printed in Scotland](https://data.nls.uk/data/digitised-collections/chapbooks-printed-in-scotland/) dataset.
> "Chapbooks were staple everyday reading material from the end of the 17th to the later 19th century. They were usually printed on a single sheet and then folded into books of 8, 12, 16 and 24 pages, and they were often illustrated with crude woodcuts. Their subjects range from news courtship, humour, occupations, fairy tales, apparitions, war, politics, crime, executions, historical figures, transvestites [*sic*] and freemasonry to religion and, of course, poetry. It has been estimated that around two thirds of chapbooks contain songs and poems, often under the title garlands." -[Source](https://data.nls.uk/data/digitised-collections/chapbooks-printed-in-scotland/)
Chapbooks were frequently illustrated, particularly on their title pages to attract customers, usually with a woodblock-printed illustration, or occasionally with a stereotyped woodcut or cast metal ornament. Apart from their artistic interest, these illustrations can also provide historical evidence such as the date, place or persons behind the publication of an item.
This dataset contains annotations for a subset of these chapbooks, created by Giles Bergel and Abhishek Dutta, based in the [Visual Geometry Group](https://www.robots.ox.ac.uk/~vgg/) in the University of Oxford. They were created under a National Librarian of Scotland's Fellowship in Digital Scholarship [awarded](https://data.nls.uk/projects/the-national-librarians-research-fellowship-in-digital-scholarship/) to Giles Bergel in 2020. These annotations provide bounding boxes around illustrations printed on a subset of the chapbook pages, created using a combination of manual annotation and machine classification, described in [this paper](https://www.robots.ox.ac.uk/~vgg/research/chapbooks/data/dutta2021visual.pdf).
The dataset also includes computationally inferred 'visual groupings' to which illustrated chapbook pages may belong. These groupings are based on the recurrence of illustrations on chapbook pages, as determined through the use of the [VGG Image Search Engine (VISE) software](https://www.robots.ox.ac.uk/~vgg/software/vise/)
### Supported Tasks and Leaderboards
- `object-detection`: the dataset contains bounding boxes for images contained in the Chapbooks
- `image-classification`: a configuration for this dataset provides a classification label indicating if a page contains an illustration or not.
- `image-matching`: a configuration for this dataset contains the annotations sorted into clusters or 'visual groupings' of illustrations that contain visually-matching content as determined by using the [VGG Image Search Engine (VISE) software](https://www.robots.ox.ac.uk/~vgg/software/vise/).
The performance on the `object-detection` task reported in the paper [Visual Analysis of Chapbooks Printed in Scotland](https://dl.acm.org/doi/10.1145/3476887.3476893) is as follows:
| IOU threshold | Precision | Recall |
|---------------|-----------|--------|
| 0.50 | 0.993 | 0.911 |
| 0.75 | 0.987 | 0.905 |
| 0.95 | 0.973 | 0.892 |
The performance on the `image-classification` task reported in the paper [Visual Analysis of Chapbooks Printed in Scotland](https://dl.acm.org/doi/10.1145/3476887.3476893) is as follows:
- Images in the original dataset: 47329
- Number of images on which at least one illustration was detected: 3629
Note that these figures do not represent images that contained multiple detections.
See the [paper](https://dl.acm.org/doi/10.1145/3476887.3476893) for examples of false-positive detections.
The performance on the 'image-matching' task is undergoing evaluation.
### Languages
Text accompanying the illustrations is in English, Scots or Scottish Gaelic.
## Dataset Structure
### Data Instances
An example instance from the `illustration-detection` split:
```python
{'image_id': 4,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=600x1080>,
'width': 600,
'height': 1080,
'objects': [{'category_id': 0,
'image_id': '4',
'id': 1,
'area': 110901,
'bbox': [34.529998779296875,
556.8300170898438,
401.44000244140625,
276.260009765625],
'segmentation': [[34.529998779296875,
556.8300170898438,
435.9700012207031,
556.8300170898438,
435.9700012207031,
833.0900268554688,
34.529998779296875,
833.0900268554688]],
'iscrowd': False}]}
```
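The `bbox` values above follow the COCO `[x, y, width, height]` convention; converting them to corner coordinates is a common first step for plotting or training detectors. A minimal, self-contained sketch using the bbox from the example instance:

```python
def coco_to_xyxy(bbox):
    """Convert a COCO-style [x, y, width, height] box to [x1, y1, x2, y2]."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

# The bbox from the example instance above:
bbox = [34.529998779296875, 556.8300170898438,
        401.44000244140625, 276.260009765625]
x1, y1, x2, y2 = coco_to_xyxy(bbox)
# x2 and y2 match the corners of the segmentation polygon in the instance.
```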
An example instance from the `image-classification` split:
```python
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=600x1080>,
'label': 1}
```
An example from the `image-matching` split:
```python
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=600x1080>,
'group-label': 231}
```
### Data Fields
The fields for the `illustration-detection` config:
- image_id: id for the image
- height: height of the image
- width: width of the image
- image: image of the chapbook page
- objects: annotations in COCO format, consisting of a list containing dictionaries with the following keys:
- bbox: bounding boxes for the images
- category_id: a label for the image
- image_id: id for the image
- iscrowd: the COCO `iscrowd` flag
- segmentation: COCO segmentation annotations (empty in this case but kept for compatibility with other processing scripts)
The fields for the `image-classification` config:
- image: image
- label: a label indicating if the page contains an illustration or not
The fields for the `image-matching` config:
- image: image of the chapbook page
- label: an id for a particular instance of an image, i.e. matching images share the same id.
### Data Splits
There is a single split `train` for all configs. K-fold validation was used in the [paper](https://dl.acm.org/doi/10.1145/3476887.3476893) describing this dataset, so no existing splits were defined.
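Since only a single `train` split is provided, users who want to reproduce a K-fold protocol need to construct folds themselves. A minimal sketch using plain index arithmetic (scikit-learn's `KFold` would work equally well):

```python
def kfold_indices(n_examples, k):
    """Split range(n_examples) into k contiguous (train, test) index folds."""
    indices = list(range(n_examples))
    fold_size = n_examples // k
    folds = []
    for i in range(k):
        start = i * fold_size
        # The last fold absorbs any remainder.
        stop = start + fold_size if i < k - 1 else n_examples
        test_idx = indices[start:stop]
        train_idx = indices[:start] + indices[stop:]
        folds.append((train_idx, test_idx))
    return folds

# Example: 10 examples, 3 folds -> test folds of sizes 3, 3, 4.
folds = kfold_indices(10, 3)
# With the `datasets` library, each fold could then be materialised via
# dataset["train"].select(train_idx) / dataset["train"].select(test_idx).
```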
## Dataset Creation
### Curation Rationale
The dataset was created to facilitate research into Scottish chapbook illustration and publishing. Detected illustrations can be browsed under publication metadata: together with the use of [VGG Image Search Engine (VISE) software](https://www.robots.ox.ac.uk/~vgg/software/vise/), this allows researchers to identify matching imagery and to infer the source of a chapbook from partial evidence. This browse and search functionality is available in this [public demo](http://meru.robots.ox.ac.uk/nls_chapbooks/filelist) documented [here](https://www.robots.ox.ac.uk/~vgg/research/chapbooks/)
### Source Data
#### Initial Data Collection and Normalization
The initial data was taken from the [National Library of Scotland's Chapbooks Printed in Scotland dataset](https://data.nls.uk/data/digitised-collections/chapbooks-printed-in-scotland/). No normalisation was performed; only the images and a subset of the metadata were used. OCR text was not used.
#### Who are the source language producers?
The initial dataset was created by the National Library of Scotland from scans and in-house curated catalogue descriptions for the NLS [Data Foundry](https://data.nls.uk) under the direction of Dr. Sarah Ames.
This subset of the data was created by Dr. Giles Bergel and Dr. Abhishek Dutta using a combination of manual annotation and machine classification, described below.
### Annotations
#### Annotation process
Annotation was initially performed on a subset of 337 of the 47329 images, using the [VGG List Annotator (LISA)](https://gitlab.com/vgg/lisa) software. Detected illustrations, displayed as annotations in LISA, were reviewed and refined in a number of passes (see [this paper](https://dl.acm.org/doi/10.1145/3476887.3476893) for more details). Initial detections were performed with an [EfficientDet](https://ai.googleblog.com/2020/04/efficientdet-towards-scalable-and.html) object detector trained on [COCO](https://cocodataset.org/#home), the annotation of which is described in [this paper](https://arxiv.org/abs/1405.0312).
#### Who are the annotators?
Abhishek Dutta created the initial 337 annotations for retraining the EfficientDet model. Detections were reviewed and in some cases revised by Giles Bergel.
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
We believe this dataset will assist in the training and benchmarking of illustration detectors. It is hoped that by automating a task that would otherwise require manual annotation it will save researchers time and labour in preparing data for both machine and human analysis. The dataset in question is based on a category of popular literature that reflected the learning, tastes and cultural faculties of both its large audiences and its largely-unknown creators - we hope that its use, reuse and adaptation will highlight the importance of cheap chapbooks in the spread of literature, knowledge and entertainment in both urban and rural regions of Scotland and the United Kingdom during this period.
### Discussion of Biases
While the original Chapbooks Printed in Scotland is the largest single collection of digitised chapbooks, it is as yet unknown if it is fully representative of all chapbooks printed in Scotland, or of cheap printed literature in general. It is known that a small number of chapbooks (less than 0.1%) within the original collection were not printed in Scotland but this is not expected to have a significant impact on the profile of the collection as a representation of the population of chapbooks as a whole.
The definition of an illustration as opposed to an ornament or other non-textual printed feature is somewhat arbitrary: edge-cases were evaluated by conformance with features that are most characteristic of the chapbook genre as a whole in terms of content, style or placement on the page.
As there is no consensus definition of the chapbook even among domain specialists, the composition of the original dataset is based on the judgement of those who assembled and curated the original collection.
### Other Known Limitations
Within this dataset, illustrations are repeatedly reused to an unusually high degree compared to other printed forms. The positioning of illustrations on the page and the size and format of chapbooks as a whole is also characteristic of the chapbook format in particular. The extent to which these annotations may be generalised to other printed works is under evaluation: initial results have been promising for other letterpress illustrations surrounded by texts.
## Additional Information
### Dataset Curators
- Giles Bergel
- Abhishek Dutta
### Licensing Information
In accordance with the [original data](https://data.nls.uk/data/digitised-collections/chapbooks-printed-in-scotland/), this dataset is in the public domain.
### Citation Information
``` bibtex
@inproceedings{10.1145/3476887.3476893,
author = {Dutta, Abhishek and Bergel, Giles and Zisserman, Andrew},
title = {Visual Analysis of Chapbooks Printed in Scotland},
year = {2021},
isbn = {9781450386906},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3476887.3476893},
doi = {10.1145/3476887.3476893},
abstract = {Chapbooks were short, cheap printed booklets produced in large quantities in Scotland, England, Ireland, North America and much of Europe between roughly the seventeenth and nineteenth centuries. A form of popular literature containing songs, stories, poems, games, riddles, religious writings and other content designed to appeal to a wide readership, they were frequently illustrated, particularly on their title-pages. This paper describes the visual analysis of such chapbook illustrations. We automatically extract all the illustrations contained in the National Library of Scotland Chapbooks Printed in Scotland dataset, and create a visual search engine to search this dataset using full or part-illustrations as queries. We also cluster these illustrations based on their visual content, and provide keyword-based search of the metadata associated with each publication. The visual search; clustering of illustrations based on visual content; and metadata search features enable researchers to forensically analyse the chapbooks dataset and to discover unnoticed relationships between its elements. We release all annotations and software tools described in this paper to enable reproduction of the results presented and to allow extension of the methodology described to datasets of a similar nature.},
booktitle = {The 6th International Workshop on Historical Document Imaging and Processing},
pages = {67–72},
numpages = {6},
keywords = {illustration detection, chapbooks, image search, visual grouping, printing, digital scholarship, illustration dataset},
location = {Lausanne, Switzerland},
series = {HIP '21}
}
```
### Contributions
Thanks to [@davanstrien](https://github.com/davanstrien) and Giles Bergel for adding this dataset. | [
-0.5554229617118835,
-0.2787717580795288,
-0.0179358571767807,
-0.249299094080925,
-0.3844262659549713,
-0.2539087235927582,
0.251785010099411,
-0.7867029905319214,
0.2476201206445694,
0.803794264793396,
-0.4046860635280609,
-0.7264525294303894,
-0.4944188892841339,
0.2312345653772354,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
victor/autotrain-data-donut-vs-croissant | victor | 2022-09-09T20:32:23Z | 69 | 0 | null | [
"task_categories:image-classification",
"region:us"
] | 2022-09-09T20:32:23Z | 2022-09-09T20:29:58.000Z | 2022-09-09T20:29:58 | ---
task_categories:
- image-classification
---
# AutoTrain Dataset for project: donut-vs-croissant
## Dataset Description
This dataset has been automatically processed by AutoTrain for project donut-vs-croissant.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<512x512 RGB PIL image>",
"target": 0
},
{
"image": "<512x512 RGB PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(num_classes=2, names=['croissant', 'donut'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 133 |
| valid | 362 |
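The integer `target` values map onto the `ClassLabel` names declared in the features above. A self-contained sketch of the mapping (with the `datasets`-library equivalent noted in a comment):

```python
# The ClassLabel names declared in the dataset features.
names = ["croissant", "donut"]

def target_to_name(target):
    """Map an integer target onto its human-readable class name."""
    return names[target]

print(target_to_name(0))  # croissant
print(target_to_name(1))  # donut

# With the `datasets` library, the same mapping is available as
# dataset["train"].features["target"].int2str(0), assuming the dataset
# has been loaded as `dataset`.
```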
| [
-0.31465864181518555,
0.035278547555208206,
0.07031149417161942,
0.292243629693985,
-0.19997717440128326,
0.33548417687416077,
-0.2656005322933197,
-0.30140945315361023,
-0.11672981828451157,
0.4858868420124054,
-0.5957857370376587,
-0.5948750972747803,
-0.6235368251800537,
0.1353305727243... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tner/multinerd | tner | 2022-09-27T19:48:40Z | 69 | 5 | null | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:multilingual",
"size_categories:<10K",
"language:de",
"language:en",
"language:es",
"language:fr",
"language:it",
"language:nl",
"language:pl",
"language:pt",
"language:ru",
"region:us"
] | 2022-09-27T19:48:40Z | 2022-09-27T19:13:36.000Z | 2022-09-27T19:13:36 | ---
language:
- de
- en
- es
- fr
- it
- nl
- pl
- pt
- ru
multilinguality:
- multilingual
size_categories:
- <10K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: MultiNERD
---
# Dataset Card for "tner/multinerd"
## Dataset Description
- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Paper:** [https://aclanthology.org/2022.findings-naacl.60/](https://aclanthology.org/2022.findings-naacl.60/)
- **Dataset:** MultiNERD
- **Domain:** Wikipedia, WikiNews
- **Number of Entity:** 18
### Dataset Summary
MultiNERD NER benchmark dataset formatted as part of the [TNER](https://github.com/asahi417/tner) project.
- Entity Types: `PER`, `LOC`, `ORG`, `ANIM`, `BIO`, `CEL`, `DIS`, `EVE`, `FOOD`, `INST`, `MEDIA`, `PLANT`, `MYTH`, `TIME`, `VEHI`, `MISC`, `SUPER`, `PHY`
## Dataset Structure
### Data Instances
An example of `train` of `de` looks as follows.
```
{
'tokens': [ "Die", "Blätter", "des", "Huflattichs", "sind", "leicht", "mit", "den", "sehr", "ähnlichen", "Blättern", "der", "Weißen", "Pestwurz", "(", "\"", "Petasites", "albus", "\"", ")", "zu", "verwechseln", "." ],
'tags': [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 0, 0, 0 ]
}
```
### Label ID
The label2id dictionary can be found [here](https://huggingface.co/datasets/tner/multinerd/raw/main/dataset/label.json).
```python
{
"O": 0,
"B-PER": 1,
"I-PER": 2,
"B-LOC": 3,
"I-LOC": 4,
"B-ORG": 5,
"I-ORG": 6,
"B-ANIM": 7,
"I-ANIM": 8,
"B-BIO": 9,
"I-BIO": 10,
"B-CEL": 11,
"I-CEL": 12,
"B-DIS": 13,
"I-DIS": 14,
"B-EVE": 15,
"I-EVE": 16,
"B-FOOD": 17,
"I-FOOD": 18,
"B-INST": 19,
"I-INST": 20,
"B-MEDIA": 21,
"I-MEDIA": 22,
"B-PLANT": 23,
"I-PLANT": 24,
"B-MYTH": 25,
"I-MYTH": 26,
"B-TIME": 27,
"I-TIME": 28,
"B-VEHI": 29,
"I-VEHI": 30,
"B-SUPER": 31,
"I-SUPER": 32,
"B-PHY": 33,
"I-PHY": 34
}
```
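Model inputs and outputs use the integer ids, so it is often convenient to invert the mapping and decode tag sequences back into BIO label strings. A minimal sketch (`label2id` here is only an excerpt of the full mapping shown above):

```python
# Excerpt of the label2id mapping above; invert it to decode predictions.
label2id = {"O": 0, "B-PER": 1, "I-PER": 2, "B-LOC": 3, "I-LOC": 4}
id2label = {i: label for label, i in label2id.items()}

def decode_tags(tags):
    """Convert a sequence of integer tags into BIO label strings."""
    return [id2label[t] for t in tags]

print(decode_tags([0, 1, 2, 0, 3]))  # ['O', 'B-PER', 'I-PER', 'O', 'B-LOC']
```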
### Data Splits
| language | test |
|:-----------|-------:|
| de | 156792 |
| en | 164144 |
| es | 173189 |
| fr | 176185 |
| it | 181927 |
| nl | 171711 |
| pl | 194965 |
| pt | 177565 |
| ru | 82858 |
### Citation Information
```
@inproceedings{tedeschi-navigli-2022-multinerd,
title = "{M}ulti{NERD}: A Multilingual, Multi-Genre and Fine-Grained Dataset for Named Entity Recognition (and Disambiguation)",
author = "Tedeschi, Simone and
Navigli, Roberto",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2022",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-naacl.60",
doi = "10.18653/v1/2022.findings-naacl.60",
pages = "801--812",
abstract = "Named Entity Recognition (NER) is the task of identifying named entities in texts and classifying them through specific semantic categories, a process which is crucial for a wide range of NLP applications. Current datasets for NER focus mainly on coarse-grained entity types, tend to consider a single textual genre and to cover a narrow set of languages, thus limiting the general applicability of NER systems.In this work, we design a new methodology for automatically producing NER annotations, and address the aforementioned limitations by introducing a novel dataset that covers 10 languages, 15 NER categories and 2 textual genres.We also introduce a manually-annotated test set, and extensively evaluate the quality of our novel dataset on both this new test set and standard benchmarks for NER.In addition, in our dataset, we include: i) disambiguation information to enable the development of multilingual entity linking systems, and ii) image URLs to encourage the creation of multimodal systems.We release our dataset at https://github.com/Babelscape/multinerd.",
}
``` | [
-0.8964641094207764,
-0.5330239534378052,
0.07254114001989365,
0.056151360273361206,
-0.11732377111911774,
0.10939352214336395,
-0.38166558742523193,
-0.45567765831947327,
0.6381259560585022,
0.24056152999401093,
-0.5447480082511902,
-0.8517646789550781,
-0.6033614277839661,
0.558777868747... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
svjack/pokemon-blip-captions-en-ja | svjack | 2022-10-31T06:22:04Z | 69 | 3 | null | [
"task_categories:text-to-image",
"annotations_creators:machine-generated",
"language_creators:other",
"multilinguality:multilingual",
"size_categories:n<1K",
"source_datasets:huggan/few-shot-pokemon",
"language:en",
"language:ja",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-10-31T06:22:04Z | 2022-10-29T07:26:57.000Z | 2022-10-29T07:26:57 | ---
license: cc-by-nc-sa-4.0
annotations_creators:
- machine-generated
language:
- en
- ja
language_creators:
- other
multilinguality:
- multilingual
pretty_name: 'Pokémon BLIP captions'
size_categories:
- n<1K
source_datasets:
- huggan/few-shot-pokemon
tags: []
task_categories:
- text-to-image
task_ids: []
---
# Dataset Card for Pokémon BLIP captions with English and Japanese.
Dataset used to train Pokémon text-to-image models; it adds a Japanese column to [Pokémon BLIP captions](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions).
BLIP-generated captions for Pokémon images from the Few Shot Pokémon dataset introduced in Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis (FastGAN). Original images were obtained from FastGAN-pytorch and captioned with the pre-trained BLIP model.
For each row the dataset contains `image`, `en_text` (caption in English) and `ja_text` (caption in Japanese) keys. `image` is a varying-size PIL jpeg, and the text fields hold the accompanying captions. Only a train split is provided.
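A minimal sketch of working with the bilingual caption columns (the `load_dataset` call needs network access and is shown commented; the row contents in the example are invented for illustration):

```python
# from datasets import load_dataset
# dataset = load_dataset("svjack/pokemon-blip-captions-en-ja", split="train")

def caption_pair(example):
    """Return the English/Japanese caption pair for one row."""
    return {"en": example["en_text"], "ja": example["ja_text"]}

# Example usage on a row-shaped dict (synthetic values):
row = {"en_text": "a drawing of a green pokemon", "ja_text": "緑のポケモンの絵"}
print(caption_pair(row)["en"])  # a drawing of a green pokemon
```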
The Japanese captions were translated with [DeepL](https://www.deepl.com/translator) | [
-0.3907925486564636,
-0.346860408782959,
0.02769717387855053,
0.41447603702545166,
-0.560822069644928,
0.13047292828559875,
-0.2887965440750122,
-0.5528609156608582,
0.5223410725593567,
0.5230457782745361,
-0.6869250535964966,
-0.33329588174819946,
-0.49807208776474,
0.3271637558937073,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Multimodal-Fatima/FGVC_Aircraft_train | Multimodal-Fatima | 2023-05-04T05:30:31Z | 69 | 0 | null | [
"region:us"
] | 2023-05-04T05:30:31Z | 2022-11-13T05:05:42.000Z | 2022-11-13T05:05:42 | ---
dataset_info:
features:
- name: image
dtype: image
- name: family
dtype:
class_label:
names:
'0': A300
'1': A310
'2': A320
'3': A330
'4': A340
'5': A380
'6': ATR-42
'7': ATR-72
'8': An-12
'9': BAE 146
'10': BAE-125
'11': Beechcraft 1900
'12': Boeing 707
'13': Boeing 717
'14': Boeing 727
'15': Boeing 737
'16': Boeing 747
'17': Boeing 757
'18': Boeing 767
'19': Boeing 777
'20': C-130
'21': C-47
'22': CRJ-200
'23': CRJ-700
'24': Cessna 172
'25': Cessna 208
'26': Cessna Citation
'27': Challenger 600
'28': DC-10
'29': DC-3
'30': DC-6
'31': DC-8
'32': DC-9
'33': DH-82
'34': DHC-1
'35': DHC-6
'36': DR-400
'37': Dash 8
'38': Dornier 328
'39': EMB-120
'40': Embraer E-Jet
'41': Embraer ERJ 145
'42': Embraer Legacy 600
'43': Eurofighter Typhoon
'44': F-16
'45': F/A-18
'46': Falcon 2000
'47': Falcon 900
'48': Fokker 100
'49': Fokker 50
'50': Fokker 70
'51': Global Express
'52': Gulfstream
'53': Hawk T1
'54': Il-76
'55': King Air
'56': L-1011
'57': MD-11
'58': MD-80
'59': MD-90
'60': Metroliner
'61': PA-28
'62': SR-20
'63': Saab 2000
'64': Saab 340
'65': Spitfire
'66': Tornado
'67': Tu-134
'68': Tu-154
'69': Yak-42
- name: manufacturer
dtype:
class_label:
names:
'0': ATR
'1': Airbus
'2': Antonov
'3': Beechcraft
'4': Boeing
'5': Bombardier Aerospace
'6': British Aerospace
'7': Canadair
'8': Cessna
'9': Cirrus Aircraft
'10': Dassault Aviation
'11': Dornier
'12': Douglas Aircraft Company
'13': Embraer
'14': Eurofighter
'15': Fairchild
'16': Fokker
'17': Gulfstream Aerospace
'18': Ilyushin
'19': Lockheed Corporation
'20': Lockheed Martin
'21': McDonnell Douglas
'22': Panavia
'23': Piper
'24': Robin
'25': Saab
'26': Supermarine
'27': Tupolev
'28': Yakovlev
'29': de Havilland
- name: label
dtype:
class_label:
names:
'0': 707-320
'1': 727-200
'2': 737-200
'3': 737-300
'4': 737-400
'5': 737-500
'6': 737-600
'7': 737-700
'8': 737-800
'9': 737-900
'10': 747-100
'11': 747-200
'12': 747-300
'13': 747-400
'14': 757-200
'15': 757-300
'16': 767-200
'17': 767-300
'18': 767-400
'19': 777-200
'20': 777-300
'21': A300B4
'22': A310
'23': A318
'24': A319
'25': A320
'26': A321
'27': A330-200
'28': A330-300
'29': A340-200
'30': A340-300
'31': A340-500
'32': A340-600
'33': A380
'34': ATR-42
'35': ATR-72
'36': An-12
'37': BAE 146-200
'38': BAE 146-300
'39': BAE-125
'40': Beechcraft 1900
'41': Boeing 717
'42': C-130
'43': C-47
'44': CRJ-200
'45': CRJ-700
'46': CRJ-900
'47': Cessna 172
'48': Cessna 208
'49': Cessna 525
'50': Cessna 560
'51': Challenger 600
'52': DC-10
'53': DC-3
'54': DC-6
'55': DC-8
'56': DC-9-30
'57': DH-82
'58': DHC-1
'59': DHC-6
'60': DHC-8-100
'61': DHC-8-300
'62': DR-400
'63': Dornier 328
'64': E-170
'65': E-190
'66': E-195
'67': EMB-120
'68': ERJ 135
'69': ERJ 145
'70': Embraer Legacy 600
'71': Eurofighter Typhoon
'72': F-16A/B
'73': F/A-18
'74': Falcon 2000
'75': Falcon 900
'76': Fokker 100
'77': Fokker 50
'78': Fokker 70
'79': Global Express
'80': Gulfstream IV
'81': Gulfstream V
'82': Hawk T1
'83': Il-76
'84': L-1011
'85': MD-11
'86': MD-80
'87': MD-87
'88': MD-90
'89': Metroliner
'90': Model B200
'91': PA-28
'92': SR-20
'93': Saab 2000
'94': Saab 340
'95': Spitfire
'96': Tornado
'97': Tu-134
'98': Tu-154
'99': Yak-42
- name: id
dtype: int64
- name: clip_tags_ViT_L_14
sequence: string
- name: LLM_Description_gpt3_downstream_tasks_ViT_L_14
sequence: string
- name: blip_caption
dtype: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14
sequence: string
- name: Attributes_ViT_L_14_text_davinci_003_full
sequence: string
- name: Attributes_ViT_L_14_text_davinci_003_fgvc
sequence: string
- name: clip_tags_ViT_L_14_with_openai_classes
sequence: string
- name: clip_tags_ViT_L_14_wo_openai_classes
sequence: string
- name: clip_tags_ViT_L_14_simple_specific
dtype: string
- name: clip_tags_ViT_L_14_ensemble_specific
dtype: string
- name: clip_tags_ViT_B_16_simple_specific
dtype: string
- name: clip_tags_ViT_B_16_ensemble_specific
dtype: string
- name: clip_tags_ViT_B_32_simple_specific
dtype: string
- name: clip_tags_ViT_B_32_ensemble_specific
dtype: string
- name: Attributes_ViT_B_16_descriptors_text_davinci_003_full
sequence: string
- name: Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B_simple_specific
dtype: string
- name: clip_tags_LAION_ViT_H_14_2B_ensemble_specific
dtype: string
splits:
- name: train
num_bytes: 931613762.0
num_examples: 3334
download_size: 925638163
dataset_size: 931613762.0
---
# Dataset Card for "FGVC_Aircraft_train"
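Each example carries three granularities of aircraft label — `manufacturer`, `family`, and the fine-grained variant `label` — all stored as integer `ClassLabel` ids. A self-contained sketch of decoding them (the name mappings below are small excerpts of the full `ClassLabel` lists declared in the dataset metadata above):

```python
# Excerpts of the ClassLabel name lists declared in the dataset features.
manufacturer_names = {1: "Airbus", 4: "Boeing"}
family_names = {5: "A380", 16: "Boeing 747"}
variant_names = {13: "747-400", 33: "A380"}

def describe(example):
    """Render the three-level label hierarchy of one example as a string."""
    return "{} / {} / {}".format(
        manufacturer_names[example["manufacturer"]],
        family_names[example["family"]],
        variant_names[example["label"]],
    )

example = {"manufacturer": 4, "family": 16, "label": 13}
print(describe(example))  # Boeing / Boeing 747 / 747-400
```

With the `datasets` library, the full mappings are available via `dataset["train"].features["label"].int2str(...)` and likewise for the other columns, assuming the dataset has been loaded as `dataset`.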
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5981546640396118,
-0.12031805515289307,
0.10828903317451477,
0.24082329869270325,
-0.1949695199728012,
0.02895129658281803,
0.33282673358917236,
0.08137919008731842,
0.5613090991973877,
0.30745983123779297,
-0.8742261528968811,
-0.45874518156051636,
-0.5210757851600647,
-0.4716338813304... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
serpion/hector | serpion | 2022-11-26T01:46:01Z | 69 | 0 | null | [
"region:us"
] | 2022-11-26T01:46:01Z | 2022-11-26T01:39:42.000Z | 2022-11-26T01:39:42 | imagenes | [
-0.5259742140769958,
-0.14760243892669678,
0.7343133687973022,
0.45351123809814453,
-0.32123830914497375,
0.0883730798959732,
0.18111208081245422,
-0.24818171560764313,
0.7819799780845642,
1.2206817865371704,
-0.41311949491500854,
-0.24998626112937927,
-0.9343641400337219,
0.06356313824653... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mariosasko/glue | mariosasko | 2023-06-08T16:42:25Z | 69 | 0 | glue | [
"task_categories:text-classification",
"task_ids:acceptability-classification",
"task_ids:natural-language-inference",
"task_ids:semantic-similarity-scoring",
"task_ids:sentiment-classification",
"task_ids:text-scoring",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monol... | 2023-06-08T16:42:25Z | 2023-01-18T12:19:24.000Z | 2023-01-18T12:19:24 | ---
annotations_creators:
- other
language_creators:
- other
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- acceptability-classification
- natural-language-inference
- semantic-similarity-scoring
- sentiment-classification
- text-scoring
paperswithcode_id: glue
pretty_name: GLUE (General Language Understanding Evaluation benchmark)
train-eval-index:
- config: cola
task: text-classification
task_id: binary_classification
splits:
train_split: train
eval_split: validation
col_mapping:
sentence: text
label: target
- config: sst2
task: text-classification
task_id: binary_classification
splits:
train_split: train
eval_split: validation
col_mapping:
sentence: text
label: target
- config: mrpc
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
sentence1: text1
sentence2: text2
label: target
- config: qqp
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
question1: text1
question2: text2
label: target
- config: stsb
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
sentence1: text1
sentence2: text2
label: target
- config: mnli
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation_matched
col_mapping:
premise: text1
hypothesis: text2
label: target
- config: mnli_mismatched
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
premise: text1
hypothesis: text2
label: target
- config: mnli_matched
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
premise: text1
hypothesis: text2
label: target
- config: qnli
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
question: text1
sentence: text2
label: target
- config: rte
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
sentence1: text1
sentence2: text2
label: target
- config: wnli
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
sentence1: text1
sentence2: text2
label: target
configs:
- ax
- cola
- mnli
- mnli_matched
- mnli_mismatched
- mrpc
- qnli
- qqp
- rte
- sst2
- stsb
- wnli
tags:
- qa-nli
- coreference-nli
- paraphrase-identification
dataset_info:
- config_name: cola
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
0: unacceptable
1: acceptable
- name: idx
dtype: int32
splits:
- name: test
num_bytes: 61049
num_examples: 1063
- name: train
num_bytes: 489149
num_examples: 8551
- name: validation
num_bytes: 60850
num_examples: 1043
download_size: 376971
dataset_size: 611048
- config_name: sst2
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
0: negative
1: positive
- name: idx
dtype: int32
splits:
- name: test
num_bytes: 217556
num_examples: 1821
- name: train
num_bytes: 4715283
num_examples: 67349
- name: validation
num_bytes: 106692
num_examples: 872
download_size: 7439277
dataset_size: 5039531
- config_name: mrpc
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
0: not_equivalent
1: equivalent
- name: idx
dtype: int32
splits:
- name: test
num_bytes: 443498
num_examples: 1725
- name: train
num_bytes: 946146
num_examples: 3668
- name: validation
num_bytes: 106142
num_examples: 408
download_size: 1494541
dataset_size: 1495786
- config_name: qqp
features:
- name: question1
dtype: string
- name: question2
dtype: string
- name: label
dtype:
class_label:
names:
0: not_duplicate
1: duplicate
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 50901116
num_examples: 363846
- name: validation
num_bytes: 5653794
num_examples: 40430
- name: test
num_bytes: 55171431
num_examples: 390965
download_size: 41696084
dataset_size: 111726341
- config_name: stsb
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: float32
- name: idx
dtype: int32
splits:
- name: test
num_bytes: 170847
num_examples: 1379
- name: train
num_bytes: 758394
num_examples: 5749
- name: validation
num_bytes: 217012
num_examples: 1500
download_size: 802872
dataset_size: 1146253
- config_name: mnli
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
0: entailment
1: neutral
2: contradiction
- name: idx
dtype: int32
splits:
- name: test_matched
num_bytes: 1854787
num_examples: 9796
- name: test_mismatched
num_bytes: 1956866
num_examples: 9847
- name: train
num_bytes: 74865118
num_examples: 392702
- name: validation_matched
num_bytes: 1839926
num_examples: 9815
- name: validation_mismatched
num_bytes: 1955384
num_examples: 9832
download_size: 312783507
dataset_size: 82472081
- config_name: mnli_mismatched
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
0: entailment
1: neutral
2: contradiction
- name: idx
dtype: int32
splits:
- name: test
num_bytes: 1956866
num_examples: 9847
- name: validation
num_bytes: 1955384
num_examples: 9832
download_size: 312783507
dataset_size: 3912250
- config_name: mnli_matched
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
0: entailment
1: neutral
2: contradiction
- name: idx
dtype: int32
splits:
- name: test
num_bytes: 1854787
num_examples: 9796
- name: validation
num_bytes: 1839926
num_examples: 9815
download_size: 312783507
dataset_size: 3694713
- config_name: qnli
features:
- name: question
dtype: string
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
0: entailment
1: not_entailment
- name: idx
dtype: int32
splits:
- name: test
num_bytes: 1376516
num_examples: 5463
- name: train
num_bytes: 25677924
num_examples: 104743
- name: validation
num_bytes: 1371727
num_examples: 5463
download_size: 10627589
dataset_size: 28426167
- config_name: rte
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
0: entailment
1: not_entailment
- name: idx
dtype: int32
splits:
- name: test
num_bytes: 975936
num_examples: 3000
- name: train
num_bytes: 848888
num_examples: 2490
- name: validation
num_bytes: 90911
num_examples: 277
download_size: 697150
dataset_size: 1915735
- config_name: wnli
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
0: not_entailment
1: entailment
- name: idx
dtype: int32
splits:
- name: test
num_bytes: 37992
num_examples: 146
- name: train
num_bytes: 107517
num_examples: 635
- name: validation
num_bytes: 12215
num_examples: 71
download_size: 28999
dataset_size: 157724
- config_name: ax
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
0: entailment
1: neutral
2: contradiction
- name: idx
dtype: int32
splits:
- name: test
num_bytes: 238392
num_examples: 1104
download_size: 222257
dataset_size: 238392
---
# Dataset Card for GLUE
## Table of Contents
- [Dataset Card for GLUE](#dataset-card-for-glue)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [ax](#ax)
- [cola](#cola)
- [mnli](#mnli)
- [mnli_matched](#mnli_matched)
- [mnli_mismatched](#mnli_mismatched)
- [mrpc](#mrpc)
- [qnli](#qnli)
- [qqp](#qqp)
- [rte](#rte)
- [sst2](#sst2)
- [stsb](#stsb)
- [wnli](#wnli)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [ax](#ax-1)
- [cola](#cola-1)
- [mnli](#mnli-1)
- [mnli_matched](#mnli_matched-1)
- [mnli_mismatched](#mnli_mismatched-1)
- [mrpc](#mrpc-1)
- [qnli](#qnli-1)
- [qqp](#qqp-1)
- [rte](#rte-1)
- [sst2](#sst2-1)
- [stsb](#stsb-1)
- [wnli](#wnli-1)
- [Data Fields](#data-fields)
- [ax](#ax-2)
- [cola](#cola-2)
- [mnli](#mnli-2)
- [mnli_matched](#mnli_matched-2)
- [mnli_mismatched](#mnli_mismatched-2)
- [mrpc](#mrpc-2)
- [qnli](#qnli-2)
- [qqp](#qqp-2)
- [rte](#rte-2)
- [sst2](#sst2-2)
- [stsb](#stsb-2)
- [wnli](#wnli-2)
- [Data Splits](#data-splits)
- [ax](#ax-3)
- [cola](#cola-3)
- [mnli](#mnli-3)
- [mnli_matched](#mnli_matched-3)
- [mnli_mismatched](#mnli_mismatched-3)
- [mrpc](#mrpc-3)
- [qnli](#qnli-3)
- [qqp](#qqp-3)
- [rte](#rte-3)
- [sst2](#sst2-3)
- [stsb](#stsb-3)
- [wnli](#wnli-3)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://nyu-mll.github.io/CoLA/](https://nyu-mll.github.io/CoLA/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 955.33 MB
- **Size of the generated dataset:** 229.68 MB
- **Total amount of disk used:** 1185.01 MB
### Dataset Summary
GLUE, the General Language Understanding Evaluation benchmark (https://gluebenchmark.com/) is a collection of resources for training, evaluating, and analyzing natural language understanding systems.
### Supported Tasks and Leaderboards
The leaderboard for the GLUE benchmark can be found [at this address](https://gluebenchmark.com/). It comprises the following tasks:
#### ax
A manually-curated evaluation dataset for fine-grained analysis of system performance on a broad range of linguistic phenomena. This dataset evaluates sentence understanding through Natural Language Inference (NLI) problems. Use a model trained on MultiNLI to produce predictions for this dataset.
#### cola
The Corpus of Linguistic Acceptability consists of English acceptability judgments drawn from books and journal articles on linguistic theory. Each example is a sequence of words annotated with whether it is a grammatical English sentence.
#### mnli
The Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the RTE authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. They also use and recommend the SNLI corpus as 550k examples of auxiliary training data.
#### mnli_matched
The matched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.
#### mnli_mismatched
The mismatched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.
#### mrpc
The Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent.
#### qnli
The Stanford Question Answering Dataset is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The authors of the benchmark convert the task into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue.
#### qqp
The Quora Question Pairs2 dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent.
#### rte
The Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges. The authors of the benchmark combined the data from RTE1 (Dagan et al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli et al., 2009). Examples are constructed based on news and Wikipedia text. The authors of the benchmark convert all datasets to a two-class split, where for three-class datasets they collapse neutral and contradiction into not entailment, for consistency.
#### sst2
The Stanford Sentiment Treebank consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. It uses the two-way (positive/negative) class split, with only sentence-level labels.
#### stsb
The Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5.
#### wnli
The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence pair classification, the authors of the benchmark construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence. They use a small evaluation set consisting of new examples derived from fiction books that was shared privately by the authors of the original corpus. While the included training set is balanced between two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, it will predict the wrong label on the corresponding development set examples. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task. The authors of the benchmark call the converted dataset WNLI (Winograd NLI).
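The pronoun-substitution conversion described for WNLI can be sketched as a simple string replacement. This is a toy illustration only; the `make_pairs` helper and the space-delimited matching are assumptions, not the benchmark's actual construction pipeline.

```python
# Toy illustration of the WNLI pronoun-substitution conversion described above;
# not the benchmark's actual pipeline.
def make_pairs(sentence, pronoun, candidates):
    """Build (premise, hypothesis) pairs by swapping the pronoun for each candidate referent."""
    return [(sentence, sentence.replace(f" {pronoun} ", f" {cand} ", 1))
            for cand in candidates]

pairs = make_pairs(
    "The trophy didn't fit in the suitcase because it was too big.",
    "it",
    ["the trophy", "the suitcase"],
)
print(pairs[0][1])  # The trophy didn't fit in the suitcase because the trophy was too big.
```

A model then labels each generated pair as entailment or not entailment independently.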
### Languages
The language data in GLUE is in English (BCP-47 `en`).
## Dataset Structure
### Data Instances
#### ax
- **Size of downloaded dataset files:** 0.21 MB
- **Size of the generated dataset:** 0.23 MB
- **Total amount of disk used:** 0.44 MB
An example of 'test' looks as follows.
```
{
"premise": "The cat sat on the mat.",
"hypothesis": "The cat did not sit on the mat.",
"label": -1,
  "idx": 0
}
```
#### cola
- **Size of downloaded dataset files:** 0.36 MB
- **Size of the generated dataset:** 0.58 MB
- **Total amount of disk used:** 0.94 MB
An example of 'train' looks as follows.
```
{
"sentence": "Our friends won't buy this analysis, let alone the next one we propose.",
"label": 1,
  "idx": 0
}
```
#### mnli
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 78.65 MB
- **Total amount of disk used:** 376.95 MB
An example of 'train' looks as follows.
```
{
"premise": "Conceptually cream skimming has two basic dimensions - product and geography.",
"hypothesis": "Product and geography are what make cream skimming work.",
"label": 1,
"idx": 0
}
```
#### mnli_matched
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 3.52 MB
- **Total amount of disk used:** 301.82 MB
An example of 'test' looks as follows.
```
{
"premise": "Hierbas, ans seco, ans dulce, and frigola are just a few names worth keeping a look-out for.",
"hypothesis": "Hierbas is a name worth looking out for.",
"label": -1,
"idx": 0
}
```
#### mnli_mismatched
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 3.73 MB
- **Total amount of disk used:** 302.02 MB
An example of 'test' looks as follows.
```
{
"premise": "What have you decided, what are you going to do?",
  "hypothesis": "So what's your decision?",
"label": -1,
"idx": 0
}
```
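As the test-split examples above show, GLUE ships test sets with `label` set to -1, meaning the gold label is withheld. A minimal sketch (toy rows, not real data) of filtering those out before computing metrics:

```python
# Sketch: GLUE test splits carry label -1 (gold label withheld).
# Filtering them out avoids computing nonsense metrics on unlabeled rows.
examples = [
    {"premise": "p1", "hypothesis": "h1", "label": -1, "idx": 0},  # unlabeled test row
    {"premise": "p2", "hypothesis": "h2", "label": 1, "idx": 1},   # labeled row
]
labeled = [ex for ex in examples if ex["label"] != -1]
print(len(labeled))  # 1
```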
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
#### ax
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: a `int32` feature.
#### cola
- `sentence`: a `string` feature.
- `label`: a classification label, with possible values including `unacceptable` (0), `acceptable` (1).
- `idx`: a `int32` feature.
#### mnli
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: a `int32` feature.
#### mnli_matched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: a `int32` feature.
#### mnli_mismatched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: a `int32` feature.
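The integer labels above can be mapped back to their string names. A minimal sketch mirroring the `class_label` listings in this card (the `label_to_str` helper is illustrative, not an official API):

```python
# Label-name tables copied from the class_label mappings listed above.
LABEL_NAMES = {
    "cola": ["unacceptable", "acceptable"],
    "mnli": ["entailment", "neutral", "contradiction"],
}

def label_to_str(config, label_id):
    """Return the string name for an integer label; -1 marks withheld test labels."""
    if label_id == -1:
        return None
    return LABEL_NAMES[config][label_id]

print(label_to_str("mnli", 2))  # contradiction
```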
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Splits
#### ax
| |test|
|---|---:|
|ax |1104|
#### cola
| |train|validation|test|
|----|----:|---------:|---:|
|cola| 8551| 1043|1063|
#### mnli
| |train |validation_matched|validation_mismatched|test_matched|test_mismatched|
|----|-----:|-----------------:|--------------------:|-----------:|--------------:|
|mnli|392702| 9815| 9832| 9796| 9847|
#### mnli_matched
| |validation|test|
|------------|---------:|---:|
|mnli_matched| 9815|9796|
#### mnli_mismatched
| |validation|test|
|---------------|---------:|---:|
|mnli_mismatched| 9832|9847|
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{warstadt2018neural,
title={Neural Network Acceptability Judgments},
author={Warstadt, Alex and Singh, Amanpreet and Bowman, Samuel R},
journal={arXiv preprint arXiv:1805.12471},
year={2018}
}
@inproceedings{wang2019glue,
title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},
author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},
note={In the Proceedings of ICLR.},
year={2019}
}
Note that each GLUE dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
```
### Contributions
Thanks to [@patpizio](https://github.com/patpizio), [@jeswan](https://github.com/jeswan), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset. | [
-0.39955243468284607,
-0.7582665681838989,
0.1232885792851448,
0.20188935101032257,
-0.07950395345687866,
-0.056324273347854614,
-0.1609123796224594,
-0.4034503698348999,
0.3546956777572632,
0.4174545109272003,
-0.7718503475189209,
-0.7129193544387817,
-0.47494813799858093,
0.3079266846179... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
LinhDuong/chatdoctor-200k | LinhDuong | 2023-03-28T07:58:46Z | 69 | 9 | null | [
"license:apache-2.0",
"arxiv:2303.14070",
"region:us"
] | 2023-03-28T07:58:46Z | 2023-03-28T07:33:20.000Z | 2023-03-28T07:33:20 | ---
license: apache-2.0
---
The ChatDoctor-200K dataset was collected from the paper https://arxiv.org/pdf/2303.14070.pdf
Alternatively, you can download the original dataset from this link https://drive.google.com/file/d/1lyfqIwlLSClhgrCutWuEe_IACNq6XNUt/view?usp=sharing | [
-0.5105237364768982,
-0.3886532783508301,
0.07246264815330505,
-0.05898980796337128,
0.008471623994410038,
0.06363356113433838,
0.06470462679862976,
-0.08283691853284836,
0.2596627473831177,
0.927512526512146,
-0.7847017049789429,
-0.3365060091018677,
-0.5327559113502502,
-0.10623743385076... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Frorozcol/recetas-cocina | Frorozcol | 2023-09-18T16:40:48Z | 69 | 1 | null | [
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:1K<n<10K",
"language:es",
"license:mit",
"region:us"
] | 2023-09-18T16:40:48Z | 2023-04-23T17:28:56.000Z | 2023-04-23T17:28:56 | ---
license: mit
task_categories:
- text-generation
- conversational
language:
- es
pretty_name: recetas de cocina
size_categories:
- 1K<n<10K
---
## Dataset summary
This is a dataset of food recipes in Spanish. Several Spanish-language recipe websites were scraped, extracting around 30k records, which are split into train, test, and valid.
## Supported tasks and leaderboards
text-generation: given the ingredients, generate the recipe.
## Language
The dataset contains Spanish from different parts of the world, especially Latin America.
## Data structure
### Instances
An example instance from the dataset is shown below:
```json
{
    'title': 'Smoothie bicolor de leche KLIM® y MILO®',
    'url': "https://www.recetasnestle.com.co/recetas/smoothie-chocolate-leche-bicolor",
    'ingredients': "2 cucharadas de MILO® (25 g) 1 taza de hielo 3 cucharadas de Leche en polvo KLIM® Clásica (24 g)",
    'steps': ' 1. Licúa las cucharadas de MILO® con media taza de hielo hasta que lo veas frapeado y pon la mezcla en un vaso, lleva al congelador mientras preparas la leche. 2. Aparte, en el mismo vaso de la licuadora añade la media taza de hielo restante y las cucharadas de leche en polvo KLIM® Clásica, licúa por 5 segundos hasta que lo veas frapeado. 3. Retira el vaso del congelador y sirve encima el licuado de la leche, así tendrás los dos colores, decora con fruta de tu preferencia.',
    'uuid': 'ca4fa322-a38d-4f6a-8c06-79f68fe729f4.'
}
```
## Data fields
+ title: Title of the recipe
+ url: URL the recipe was scraped from
+ ingredients: The ingredients for the recipe
+ steps: The steps to make the recipe
+ uuid: Unique identifier of the entry.
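For the text-generation task described above, one way to pair inputs and targets is to format the ingredients as a prompt and use the steps as the target. A minimal sketch (field names taken from this card; the exact prompt template is an assumption):

```python
def recipe_to_pair(example):
    """Build an (input, target) pair from a recipe row; the prompt template is an assumption."""
    prompt = f"Ingredientes: {example['ingredients']}\nReceta:"
    return prompt, example["steps"]

prompt, target = recipe_to_pair({
    "title": "Smoothie",
    "ingredients": "2 cucharadas de MILO, 1 taza de hielo",
    "steps": "Licúa todo y sirve.",
})
print(prompt)
```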
| [
-0.2030310332775116,
-0.5838857889175415,
0.11216093599796295,
0.31922510266304016,
-0.40550851821899414,
0.3875105381011963,
-0.09508998692035675,
-0.22447121143341064,
0.6106696724891663,
0.572239875793457,
-0.6554107069969177,
-0.6312726736068726,
-0.6993041038513184,
0.3694321513175964... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
theoxo/proofwriter-deduction-balanced | theoxo | 2023-06-23T03:14:01Z | 69 | 0 | null | [
"license:cc-by-4.0",
"region:us"
] | 2023-06-23T03:14:01Z | 2023-06-14T17:26:17.000Z | 2023-06-14T17:26:17 | ---
license: cc-by-4.0
---
A processed subset of the OWA section of the [ProofWriter dataset](https://allenai.org/data/proofwriter).
Each train/test split contains 300 entries, each of which has a unique set of theories and a single question for those theories.
Both splits are balanced so that the depth of the proof required to answer the question varies evenly between 0-5 (50 entries each), and the labels are balanced (100 each).
'Unknown' labels have been replaced by 'Uncertain' to match other datasets.
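The stated balance (50 entries per proof depth 0-5, 100 entries per label) can be checked with a simple counter. A sketch over toy rows; the `depth` and `label` field names are assumptions about the loaded schema:

```python
from collections import Counter

def check_balance(rows):
    """Check the split balance described above; the 'depth' and 'label' keys are assumptions."""
    depth_counts = Counter(r["depth"] for r in rows)
    label_counts = Counter(r["label"] for r in rows)
    return (all(c == 50 for c in depth_counts.values())
            and all(c == 100 for c in label_counts.values()))

# Toy split with the documented structure: 6 depths x 50 rows, 3 labels x 100 rows.
rows = [{"depth": j // 50, "label": ("True", "False", "Uncertain")[j % 3]}
        for j in range(300)]
print(check_balance(rows))  # True
```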
| [
-0.7268860340118408,
-0.5468916296958923,
0.5783569812774658,
0.25920698046684265,
-0.008174418471753597,
-0.1936274915933609,
0.33570125699043274,
-0.660929262638092,
0.17602378129959106,
0.3588620126247406,
-0.7100278735160828,
-0.11593664437532425,
-0.5727256536483765,
0.359906941652298... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
NebulaByte/E-Commerce_Customer_Support_Conversations | NebulaByte | 2023-07-24T05:56:38Z | 69 | 2 | null | [
"region:us"
] | 2023-07-24T05:56:38Z | 2023-07-24T05:56:30.000Z | 2023-07-24T05:56:30 | ---
dataset_info:
features:
- name: issue_area
dtype: string
- name: issue_category
dtype: string
- name: issue_sub_category
dtype: string
- name: issue_category_sub_category
dtype: string
- name: customer_sentiment
dtype: string
- name: product_category
dtype: string
- name: product_sub_category
dtype: string
- name: issue_complexity
dtype: string
- name: agent_experience_level
dtype: string
- name: agent_experience_level_desc
dtype: string
- name: conversation
dtype: string
splits:
- name: train
num_bytes: 2537279
num_examples: 1000
download_size: 827367
dataset_size: 2537279
---
# Dataset Card for "E-Commerce_Customer_Support_Conversations"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6260913014411926,
-0.4963999092578888,
0.06573004275560379,
0.2981049418449402,
-0.1468643993139267,
-0.06019754707813263,
0.13682663440704346,
-0.4241786003112793,
0.9431630969047546,
0.5457320213317871,
-1.1368235349655151,
-0.6851530075073242,
-0.1981237232685089,
-0.1463503688573837... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ZahrizhalAli/mental_health_conversational_dataset | ZahrizhalAli | 2023-08-25T04:02:08Z | 69 | 2 | null | [
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:n<1K",
"language:en",
"license:mit",
"medical",
"region:us"
] | 2023-08-25T04:02:08Z | 2023-08-10T02:44:34.000Z | 2023-08-10T02:44:34 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_examples: 175
license: mit
task_categories:
- text-generation
- conversational
language:
- en
tags:
- medical
pretty_name: Mental Health Chatbot Dataset
size_categories:
- n<1K
---
# CREDIT: Dataset Card for "heliosbrahma/mental_health_chatbot_dataset"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
## Dataset Description
### Dataset Summary
This dataset contains conversational pairs of questions and answers, each stored in a single text field, related to mental health. The dataset was curated from popular healthcare blogs such as WebMD, Mayo Clinic, and HealthLine, as well as online FAQs. All questions and answers have been anonymized to remove any PII and pre-processed to remove unwanted characters.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
A data instance includes a text column containing a conversational pair of a question and an answer. Questions were asked by patients and answers were given by healthcare providers.
### Data Fields
- 'text': a conversational question-and-answer pair between a patient and a healthcare provider, stored as a single string.
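As a minimal sketch of what an instance looks like, the snippet below builds a hypothetical record and splits it back into its question/answer pair. The `<HUMAN>:`/`<ASSISTANT>:` delimiters are an assumption for illustration only; the card does not document the exact format of the text field.

```python
# Hypothetical instance; the dataset's actual delimiter format may differ.
sample = {
    "text": "<HUMAN>: How can I manage everyday stress? "
            "<ASSISTANT>: Regular exercise, adequate sleep, and talking "
            "to someone you trust can all help reduce stress."
}

# Recover the question/answer pair from the single 'text' field.
question, answer = sample["text"].split("<ASSISTANT>:")
question = question.replace("<HUMAN>:", "").strip()
answer = answer.strip()

print(question)
print(answer)
```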
## Dataset Creation
### Curation Rationale
Chatbots offer a readily available and accessible platform for individuals seeking support. They can be accessed anytime and anywhere, providing immediate assistance to those in need. Chatbots can offer empathetic and non-judgmental responses, providing emotional support to users. While they cannot replace human interaction entirely, they can be a helpful supplement, especially in moments of distress.
Hence, this dataset was curated to help fine-tune a conversational AI model that can then be deployed and provided to the end patient as a chatbot.
### Source Data
This dataset was curated from popular healthcare blogs such as WebMD, Mayo Clinic, and HealthLine, as well as online FAQs.
### Personal and Sensitive Information
The dataset may contain sensitive information related to mental health. All questions and answers have been anonymized to remove any PII data. | [
-0.25633418560028076,
-0.8079373836517334,
0.2372516393661499,
0.31833288073539734,
-0.13426822423934937,
0.21773973107337952,
-0.12227881699800491,
-0.17819751799106598,
0.46258097887039185,
0.6992467045783997,
-0.9965055584907532,
-0.7488738298416138,
-0.7174209356307983,
-0.152893647551... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
open-llm-leaderboard/details_quantumaikr__llama-2-70b-fb16-guanaco-1k | open-llm-leaderboard | 2023-08-27T12:26:17Z | 69 | 0 | null | [
"region:us"
] | 2023-08-27T12:26:17Z | 2023-08-18T00:01:20.000Z | 2023-08-18T00:01:20 | ---
pretty_name: Evaluation run of quantumaikr/llama-2-70b-fb16-guanaco-1k
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [quantumaikr/llama-2-70b-fb16-guanaco-1k](https://huggingface.co/quantumaikr/llama-2-70b-fb16-guanaco-1k)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 61 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_quantumaikr__llama-2-70b-fb16-guanaco-1k\"\
,\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\
\nThese are the [latest results from run 2023-08-10T00:33:03.607588](https://huggingface.co/datasets/open-llm-leaderboard/details_quantumaikr__llama-2-70b-fb16-guanaco-1k/blob/main/results_2023-08-10T00%3A33%3A03.607588.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You can find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.7013441332798022,\n\
\ \"acc_stderr\": 0.03091715385865452,\n \"acc_norm\": 0.7054300239648517,\n\
\ \"acc_norm_stderr\": 0.030884754243271178,\n \"mc1\": 0.40636474908200737,\n\
\ \"mc1_stderr\": 0.0171938358120939,\n \"mc2\": 0.5756052671501329,\n\
\ \"mc2_stderr\": 0.014559658555893657\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6510238907849829,\n \"acc_stderr\": 0.013928933461382501,\n\
\ \"acc_norm\": 0.7047781569965871,\n \"acc_norm_stderr\": 0.013329750293382318\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.686018721370245,\n\
\ \"acc_stderr\": 0.004631603539751948,\n \"acc_norm\": 0.8733320055765784,\n\
\ \"acc_norm_stderr\": 0.00331920940013512\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \
\ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6370370370370371,\n\
\ \"acc_stderr\": 0.041539484047424,\n \"acc_norm\": 0.6370370370370371,\n\
\ \"acc_norm_stderr\": 0.041539484047424\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7894736842105263,\n \"acc_stderr\": 0.03317672787533157,\n\
\ \"acc_norm\": 0.7894736842105263,\n \"acc_norm_stderr\": 0.03317672787533157\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.76,\n\
\ \"acc_stderr\": 0.04292346959909284,\n \"acc_norm\": 0.76,\n \
\ \"acc_norm_stderr\": 0.04292346959909284\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7320754716981132,\n \"acc_stderr\": 0.027257260322494845,\n\
\ \"acc_norm\": 0.7320754716981132,\n \"acc_norm_stderr\": 0.027257260322494845\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8402777777777778,\n\
\ \"acc_stderr\": 0.030635578972093274,\n \"acc_norm\": 0.8402777777777778,\n\
\ \"acc_norm_stderr\": 0.030635578972093274\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620333,\n \
\ \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.6,\n \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.6,\n\
\ \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.4,\n \"acc_stderr\": 0.049236596391733084,\n \
\ \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.049236596391733084\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6705202312138728,\n\
\ \"acc_stderr\": 0.03583901754736413,\n \"acc_norm\": 0.6705202312138728,\n\
\ \"acc_norm_stderr\": 0.03583901754736413\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.39215686274509803,\n \"acc_stderr\": 0.04858083574266345,\n\
\ \"acc_norm\": 0.39215686274509803,\n \"acc_norm_stderr\": 0.04858083574266345\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n\
\ \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.6638297872340425,\n \"acc_stderr\": 0.030881618520676942,\n\
\ \"acc_norm\": 0.6638297872340425,\n \"acc_norm_stderr\": 0.030881618520676942\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.45614035087719296,\n\
\ \"acc_stderr\": 0.04685473041907789,\n \"acc_norm\": 0.45614035087719296,\n\
\ \"acc_norm_stderr\": 0.04685473041907789\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.6275862068965518,\n \"acc_stderr\": 0.04028731532947559,\n\
\ \"acc_norm\": 0.6275862068965518,\n \"acc_norm_stderr\": 0.04028731532947559\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4497354497354497,\n \"acc_stderr\": 0.02562085704293665,\n \"\
acc_norm\": 0.4497354497354497,\n \"acc_norm_stderr\": 0.02562085704293665\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.47619047619047616,\n\
\ \"acc_stderr\": 0.04467062628403273,\n \"acc_norm\": 0.47619047619047616,\n\
\ \"acc_norm_stderr\": 0.04467062628403273\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.47,\n \"acc_stderr\": 0.05016135580465919,\n \
\ \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.05016135580465919\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8354838709677419,\n\
\ \"acc_stderr\": 0.021090847745939306,\n \"acc_norm\": 0.8354838709677419,\n\
\ \"acc_norm_stderr\": 0.021090847745939306\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5320197044334976,\n \"acc_stderr\": 0.035107665979592154,\n\
\ \"acc_norm\": 0.5320197044334976,\n \"acc_norm_stderr\": 0.035107665979592154\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.8,\n \"acc_stderr\": 0.04020151261036845,\n \"acc_norm\"\
: 0.8,\n \"acc_norm_stderr\": 0.04020151261036845\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.8242424242424242,\n \"acc_stderr\": 0.02972094300622445,\n\
\ \"acc_norm\": 0.8242424242424242,\n \"acc_norm_stderr\": 0.02972094300622445\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.898989898989899,\n \"acc_stderr\": 0.02146973557605533,\n \"acc_norm\"\
: 0.898989898989899,\n \"acc_norm_stderr\": 0.02146973557605533\n },\n\
\ \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \
\ \"acc\": 0.9378238341968912,\n \"acc_stderr\": 0.017426974154240528,\n\
\ \"acc_norm\": 0.9378238341968912,\n \"acc_norm_stderr\": 0.017426974154240528\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.7333333333333333,\n \"acc_stderr\": 0.022421273612923714,\n\
\ \"acc_norm\": 0.7333333333333333,\n \"acc_norm_stderr\": 0.022421273612923714\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.32222222222222224,\n \"acc_stderr\": 0.028493465091028597,\n \
\ \"acc_norm\": 0.32222222222222224,\n \"acc_norm_stderr\": 0.028493465091028597\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.7605042016806722,\n \"acc_stderr\": 0.02772206549336127,\n \
\ \"acc_norm\": 0.7605042016806722,\n \"acc_norm_stderr\": 0.02772206549336127\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.45695364238410596,\n \"acc_stderr\": 0.04067325174247443,\n \"\
acc_norm\": 0.45695364238410596,\n \"acc_norm_stderr\": 0.04067325174247443\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8935779816513761,\n \"acc_stderr\": 0.013221554674594372,\n \"\
acc_norm\": 0.8935779816513761,\n \"acc_norm_stderr\": 0.013221554674594372\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.625,\n \"acc_stderr\": 0.033016908987210894,\n \"acc_norm\": 0.625,\n\
\ \"acc_norm_stderr\": 0.033016908987210894\n },\n \"harness|hendrycksTest-high_school_us_history|5\"\
: {\n \"acc\": 0.9313725490196079,\n \"acc_stderr\": 0.017744453647073312,\n\
\ \"acc_norm\": 0.9313725490196079,\n \"acc_norm_stderr\": 0.017744453647073312\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8860759493670886,\n \"acc_stderr\": 0.020681745135884562,\n \
\ \"acc_norm\": 0.8860759493670886,\n \"acc_norm_stderr\": 0.020681745135884562\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7847533632286996,\n\
\ \"acc_stderr\": 0.027584066602208274,\n \"acc_norm\": 0.7847533632286996,\n\
\ \"acc_norm_stderr\": 0.027584066602208274\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8549618320610687,\n \"acc_stderr\": 0.030884661089515375,\n\
\ \"acc_norm\": 0.8549618320610687,\n \"acc_norm_stderr\": 0.030884661089515375\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8677685950413223,\n \"acc_stderr\": 0.03092278832044579,\n \"\
acc_norm\": 0.8677685950413223,\n \"acc_norm_stderr\": 0.03092278832044579\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8425925925925926,\n\
\ \"acc_stderr\": 0.035207039905179635,\n \"acc_norm\": 0.8425925925925926,\n\
\ \"acc_norm_stderr\": 0.035207039905179635\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7914110429447853,\n \"acc_stderr\": 0.031921934489347235,\n\
\ \"acc_norm\": 0.7914110429447853,\n \"acc_norm_stderr\": 0.031921934489347235\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5,\n\
\ \"acc_stderr\": 0.04745789978762494,\n \"acc_norm\": 0.5,\n \
\ \"acc_norm_stderr\": 0.04745789978762494\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8349514563106796,\n \"acc_stderr\": 0.03675668832233188,\n\
\ \"acc_norm\": 0.8349514563106796,\n \"acc_norm_stderr\": 0.03675668832233188\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8974358974358975,\n\
\ \"acc_stderr\": 0.01987565502786746,\n \"acc_norm\": 0.8974358974358975,\n\
\ \"acc_norm_stderr\": 0.01987565502786746\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8646232439335888,\n\
\ \"acc_stderr\": 0.012234384586856488,\n \"acc_norm\": 0.8646232439335888,\n\
\ \"acc_norm_stderr\": 0.012234384586856488\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7832369942196532,\n \"acc_stderr\": 0.022183477668412856,\n\
\ \"acc_norm\": 0.7832369942196532,\n \"acc_norm_stderr\": 0.022183477668412856\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.5910614525139665,\n\
\ \"acc_stderr\": 0.016442830654715544,\n \"acc_norm\": 0.5910614525139665,\n\
\ \"acc_norm_stderr\": 0.016442830654715544\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7679738562091504,\n \"acc_stderr\": 0.024170840879340873,\n\
\ \"acc_norm\": 0.7679738562091504,\n \"acc_norm_stderr\": 0.024170840879340873\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.77491961414791,\n\
\ \"acc_stderr\": 0.023720088516179027,\n \"acc_norm\": 0.77491961414791,\n\
\ \"acc_norm_stderr\": 0.023720088516179027\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.8364197530864198,\n \"acc_stderr\": 0.02058146613825712,\n\
\ \"acc_norm\": 0.8364197530864198,\n \"acc_norm_stderr\": 0.02058146613825712\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.5709219858156028,\n \"acc_stderr\": 0.029525914302558562,\n \
\ \"acc_norm\": 0.5709219858156028,\n \"acc_norm_stderr\": 0.029525914302558562\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5560625814863103,\n\
\ \"acc_stderr\": 0.012689708167787679,\n \"acc_norm\": 0.5560625814863103,\n\
\ \"acc_norm_stderr\": 0.012689708167787679\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.7536764705882353,\n \"acc_stderr\": 0.02617343857052,\n\
\ \"acc_norm\": 0.7536764705882353,\n \"acc_norm_stderr\": 0.02617343857052\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.7565359477124183,\n \"acc_stderr\": 0.017362473762146613,\n \
\ \"acc_norm\": 0.7565359477124183,\n \"acc_norm_stderr\": 0.017362473762146613\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7454545454545455,\n\
\ \"acc_stderr\": 0.041723430387053825,\n \"acc_norm\": 0.7454545454545455,\n\
\ \"acc_norm_stderr\": 0.041723430387053825\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.8204081632653061,\n \"acc_stderr\": 0.024573293589585637,\n\
\ \"acc_norm\": 0.8204081632653061,\n \"acc_norm_stderr\": 0.024573293589585637\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8905472636815921,\n\
\ \"acc_stderr\": 0.022076326101824657,\n \"acc_norm\": 0.8905472636815921,\n\
\ \"acc_norm_stderr\": 0.022076326101824657\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.89,\n \"acc_stderr\": 0.03144660377352203,\n \
\ \"acc_norm\": 0.89,\n \"acc_norm_stderr\": 0.03144660377352203\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5301204819277109,\n\
\ \"acc_stderr\": 0.03885425420866767,\n \"acc_norm\": 0.5301204819277109,\n\
\ \"acc_norm_stderr\": 0.03885425420866767\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8538011695906432,\n \"acc_stderr\": 0.027097290118070806,\n\
\ \"acc_norm\": 0.8538011695906432,\n \"acc_norm_stderr\": 0.027097290118070806\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.40636474908200737,\n\
\ \"mc1_stderr\": 0.0171938358120939,\n \"mc2\": 0.5756052671501329,\n\
\ \"mc2_stderr\": 0.014559658555893657\n }\n}\n```"
repo_url: https://huggingface.co/quantumaikr/llama-2-70b-fb16-guanaco-1k
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|arc:challenge|25_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hellaswag|10_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-10T00:33:03.607588.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-10T00:33:03.607588.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-10T00:33:03.607588.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-10T00:33:03.607588.parquet'
- config_name: results
data_files:
- split: 2023_08_10T00_33_03.607588
path:
- results_2023-08-10T00:33:03.607588.parquet
- split: latest
path:
- results_2023-08-10T00:33:03.607588.parquet
---
# Dataset Card for Evaluation run of quantumaikr/llama-2-70b-fb16-guanaco-1k
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/quantumaikr/llama-2-70b-fb16-guanaco-1k
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [quantumaikr/llama-2-70b-fb16-guanaco-1k](https://huggingface.co/quantumaikr/llama-2-70b-fb16-guanaco-1k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_quantumaikr__llama-2-70b-fb16-guanaco-1k",
"harness_truthfulqa_mc_0",
split="train")
```
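Each `config_name` in the front matter above appears to be derived mechanically from the harness task string used in the parquet file names, with the `harness|` prefix restored and the `|`, `:`, and `-` separators normalized to underscores. A small helper (an illustrative sketch inferred from the listings above, not part of the `datasets` API) makes that mapping explicit:

```python
def task_to_config(task: str) -> str:
    """Map a harness task string (as it appears in the parquet file
    names, e.g. 'harness|truthfulqa:mc|0') to the matching
    `config_name` (e.g. 'harness_truthfulqa_mc_0')."""
    body = task.removeprefix("harness|")
    # Normalize every separator used in task names to an underscore.
    for sep in ("|", ":", "-"):
        body = body.replace(sep, "_")
    return "harness_" + body
```

For example, `task_to_config("harness|hendrycksTest-clinical_knowledge|5")` yields `harness_hendrycksTest_clinical_knowledge_5`, matching the config name listed in the front matter.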
## Latest results
These are the [latest results from run 2023-08-10T00:33:03.607588](https://huggingface.co/datasets/open-llm-leaderboard/details_quantumaikr__llama-2-70b-fb16-guanaco-1k/blob/main/results_2023-08-10T00%3A33%3A03.607588.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.7013441332798022,
"acc_stderr": 0.03091715385865452,
"acc_norm": 0.7054300239648517,
"acc_norm_stderr": 0.030884754243271178,
"mc1": 0.40636474908200737,
"mc1_stderr": 0.0171938358120939,
"mc2": 0.5756052671501329,
"mc2_stderr": 0.014559658555893657
},
"harness|arc:challenge|25": {
"acc": 0.6510238907849829,
"acc_stderr": 0.013928933461382501,
"acc_norm": 0.7047781569965871,
"acc_norm_stderr": 0.013329750293382318
},
"harness|hellaswag|10": {
"acc": 0.686018721370245,
"acc_stderr": 0.004631603539751948,
"acc_norm": 0.8733320055765784,
"acc_norm_stderr": 0.00331920940013512
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6370370370370371,
"acc_stderr": 0.041539484047424,
"acc_norm": 0.6370370370370371,
"acc_norm_stderr": 0.041539484047424
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7894736842105263,
"acc_stderr": 0.03317672787533157,
"acc_norm": 0.7894736842105263,
"acc_norm_stderr": 0.03317672787533157
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909284,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909284
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7320754716981132,
"acc_stderr": 0.027257260322494845,
"acc_norm": 0.7320754716981132,
"acc_norm_stderr": 0.027257260322494845
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8402777777777778,
"acc_stderr": 0.030635578972093274,
"acc_norm": 0.8402777777777778,
"acc_norm_stderr": 0.030635578972093274
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.6,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.6,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.4,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.4,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6705202312138728,
"acc_stderr": 0.03583901754736413,
"acc_norm": 0.6705202312138728,
"acc_norm_stderr": 0.03583901754736413
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.39215686274509803,
"acc_stderr": 0.04858083574266345,
"acc_norm": 0.39215686274509803,
"acc_norm_stderr": 0.04858083574266345
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6638297872340425,
"acc_stderr": 0.030881618520676942,
"acc_norm": 0.6638297872340425,
"acc_norm_stderr": 0.030881618520676942
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.45614035087719296,
"acc_stderr": 0.04685473041907789,
"acc_norm": 0.45614035087719296,
"acc_norm_stderr": 0.04685473041907789
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6275862068965518,
"acc_stderr": 0.04028731532947559,
"acc_norm": 0.6275862068965518,
"acc_norm_stderr": 0.04028731532947559
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4497354497354497,
"acc_stderr": 0.02562085704293665,
"acc_norm": 0.4497354497354497,
"acc_norm_stderr": 0.02562085704293665
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.47619047619047616,
"acc_stderr": 0.04467062628403273,
"acc_norm": 0.47619047619047616,
"acc_norm_stderr": 0.04467062628403273
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.47,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.47,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8354838709677419,
"acc_stderr": 0.021090847745939306,
"acc_norm": 0.8354838709677419,
"acc_norm_stderr": 0.021090847745939306
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5320197044334976,
"acc_stderr": 0.035107665979592154,
"acc_norm": 0.5320197044334976,
"acc_norm_stderr": 0.035107665979592154
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.8,
"acc_stderr": 0.04020151261036845,
"acc_norm": 0.8,
"acc_norm_stderr": 0.04020151261036845
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8242424242424242,
"acc_stderr": 0.02972094300622445,
"acc_norm": 0.8242424242424242,
"acc_norm_stderr": 0.02972094300622445
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.898989898989899,
"acc_stderr": 0.02146973557605533,
"acc_norm": 0.898989898989899,
"acc_norm_stderr": 0.02146973557605533
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9378238341968912,
"acc_stderr": 0.017426974154240528,
"acc_norm": 0.9378238341968912,
"acc_norm_stderr": 0.017426974154240528
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.7333333333333333,
"acc_stderr": 0.022421273612923714,
"acc_norm": 0.7333333333333333,
"acc_norm_stderr": 0.022421273612923714
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.32222222222222224,
"acc_stderr": 0.028493465091028597,
"acc_norm": 0.32222222222222224,
"acc_norm_stderr": 0.028493465091028597
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7605042016806722,
"acc_stderr": 0.02772206549336127,
"acc_norm": 0.7605042016806722,
"acc_norm_stderr": 0.02772206549336127
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.45695364238410596,
"acc_stderr": 0.04067325174247443,
"acc_norm": 0.45695364238410596,
"acc_norm_stderr": 0.04067325174247443
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8935779816513761,
"acc_stderr": 0.013221554674594372,
"acc_norm": 0.8935779816513761,
"acc_norm_stderr": 0.013221554674594372
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.625,
"acc_stderr": 0.033016908987210894,
"acc_norm": 0.625,
"acc_norm_stderr": 0.033016908987210894
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9313725490196079,
"acc_stderr": 0.017744453647073312,
"acc_norm": 0.9313725490196079,
"acc_norm_stderr": 0.017744453647073312
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8860759493670886,
"acc_stderr": 0.020681745135884562,
"acc_norm": 0.8860759493670886,
"acc_norm_stderr": 0.020681745135884562
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7847533632286996,
"acc_stderr": 0.027584066602208274,
"acc_norm": 0.7847533632286996,
"acc_norm_stderr": 0.027584066602208274
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8549618320610687,
"acc_stderr": 0.030884661089515375,
"acc_norm": 0.8549618320610687,
"acc_norm_stderr": 0.030884661089515375
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8677685950413223,
"acc_stderr": 0.03092278832044579,
"acc_norm": 0.8677685950413223,
"acc_norm_stderr": 0.03092278832044579
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8425925925925926,
"acc_stderr": 0.035207039905179635,
"acc_norm": 0.8425925925925926,
"acc_norm_stderr": 0.035207039905179635
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7914110429447853,
"acc_stderr": 0.031921934489347235,
"acc_norm": 0.7914110429447853,
"acc_norm_stderr": 0.031921934489347235
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5,
"acc_stderr": 0.04745789978762494,
"acc_norm": 0.5,
"acc_norm_stderr": 0.04745789978762494
},
"harness|hendrycksTest-management|5": {
"acc": 0.8349514563106796,
"acc_stderr": 0.03675668832233188,
"acc_norm": 0.8349514563106796,
"acc_norm_stderr": 0.03675668832233188
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8974358974358975,
"acc_stderr": 0.01987565502786746,
"acc_norm": 0.8974358974358975,
"acc_norm_stderr": 0.01987565502786746
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8646232439335888,
"acc_stderr": 0.012234384586856488,
"acc_norm": 0.8646232439335888,
"acc_norm_stderr": 0.012234384586856488
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7832369942196532,
"acc_stderr": 0.022183477668412856,
"acc_norm": 0.7832369942196532,
"acc_norm_stderr": 0.022183477668412856
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.5910614525139665,
"acc_stderr": 0.016442830654715544,
"acc_norm": 0.5910614525139665,
"acc_norm_stderr": 0.016442830654715544
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7679738562091504,
"acc_stderr": 0.024170840879340873,
"acc_norm": 0.7679738562091504,
"acc_norm_stderr": 0.024170840879340873
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.77491961414791,
"acc_stderr": 0.023720088516179027,
"acc_norm": 0.77491961414791,
"acc_norm_stderr": 0.023720088516179027
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8364197530864198,
"acc_stderr": 0.02058146613825712,
"acc_norm": 0.8364197530864198,
"acc_norm_stderr": 0.02058146613825712
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5709219858156028,
"acc_stderr": 0.029525914302558562,
"acc_norm": 0.5709219858156028,
"acc_norm_stderr": 0.029525914302558562
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.5560625814863103,
"acc_stderr": 0.012689708167787679,
"acc_norm": 0.5560625814863103,
"acc_norm_stderr": 0.012689708167787679
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7536764705882353,
"acc_stderr": 0.02617343857052,
"acc_norm": 0.7536764705882353,
"acc_norm_stderr": 0.02617343857052
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.7565359477124183,
"acc_stderr": 0.017362473762146613,
"acc_norm": 0.7565359477124183,
"acc_norm_stderr": 0.017362473762146613
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7454545454545455,
"acc_stderr": 0.041723430387053825,
"acc_norm": 0.7454545454545455,
"acc_norm_stderr": 0.041723430387053825
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.8204081632653061,
"acc_stderr": 0.024573293589585637,
"acc_norm": 0.8204081632653061,
"acc_norm_stderr": 0.024573293589585637
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8905472636815921,
"acc_stderr": 0.022076326101824657,
"acc_norm": 0.8905472636815921,
"acc_norm_stderr": 0.022076326101824657
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.89,
"acc_stderr": 0.03144660377352203,
"acc_norm": 0.89,
"acc_norm_stderr": 0.03144660377352203
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5301204819277109,
"acc_stderr": 0.03885425420866767,
"acc_norm": 0.5301204819277109,
"acc_norm_stderr": 0.03885425420866767
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8538011695906432,
"acc_stderr": 0.027097290118070806,
"acc_norm": 0.8538011695906432,
"acc_norm_stderr": 0.027097290118070806
},
"harness|truthfulqa:mc|0": {
"mc1": 0.40636474908200737,
"mc1_stderr": 0.0171938358120939,
"mc2": 0.5756052671501329,
"mc2_stderr": 0.014559658555893657
}
}
```
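The JSON block above can be post-processed directly in Python once parsed into a dict. For instance, averaging a handful of the per-task accuracies (values copied from the block above; the subset choice is purely illustrative):

```python
from statistics import mean

# A small subset of the per-task accuracies reported above.
scores = {
    "harness|hendrycksTest-abstract_algebra|5": 0.37,
    "harness|hendrycksTest-anatomy|5": 0.6370370370370371,
    "harness|hendrycksTest-astronomy|5": 0.7894736842105263,
}

# Mean accuracy over this subset of MMLU subtasks.
subset_mean = round(mean(scores.values()), 4)
print(subset_mean)  # 0.5988
```

The same pattern scales to all 57 `hendrycksTest` entries if you load the full results JSON from the repository.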
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
open-llm-leaderboard/details_quantumaikr__llama-2-70b-fb16-orca-chat-10k | open-llm-leaderboard | 2023-10-18T08:24:46Z | 69 | 0 | null | [
"region:us"
] | 2023-10-18T08:24:46Z | 2023-08-18T18:46:27.000Z | 2023-08-18T18:46:27 | ---
pretty_name: Evaluation run of quantumaikr/llama-2-70b-fb16-orca-chat-10k
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [quantumaikr/llama-2-70b-fb16-orca-chat-10k](https://huggingface.co/quantumaikr/llama-2-70b-fb16-orca-chat-10k)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_quantumaikr__llama-2-70b-fb16-orca-chat-10k\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-18T08:24:33.430081](https://huggingface.co/datasets/open-llm-leaderboard/details_quantumaikr__llama-2-70b-fb16-orca-chat-10k/blob/main/results_2023-10-18T08-24-33.430081.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0028313758389261743,\n\
\ \"em_stderr\": 0.0005441551135494018,\n \"f1\": 0.0711283557046983,\n\
\ \"f1_stderr\": 0.001478786284269493,\n \"acc\": 0.5552504139308139,\n\
\ \"acc_stderr\": 0.011242265850160478\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0028313758389261743,\n \"em_stderr\": 0.0005441551135494018,\n\
\ \"f1\": 0.0711283557046983,\n \"f1_stderr\": 0.001478786284269493\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.26914329037149354,\n \
\ \"acc_stderr\": 0.012216595457292733\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8413575374901342,\n \"acc_stderr\": 0.010267936243028223\n\
\ }\n}\n```"
repo_url: https://huggingface.co/quantumaikr/llama-2-70b-fb16-orca-chat-10k
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|arc:challenge|25_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_18T08_24_33.430081
path:
- '**/details_harness|drop|3_2023-10-18T08-24-33.430081.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-18T08-24-33.430081.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_18T08_24_33.430081
path:
- '**/details_harness|gsm8k|5_2023-10-18T08-24-33.430081.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-18T08-24-33.430081.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hellaswag|10_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-17T21:37:12.844888.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-17T21:37:12.844888.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-17T21:37:12.844888.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_18T08_24_33.430081
path:
- '**/details_harness|winogrande|5_2023-10-18T08-24-33.430081.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-18T08-24-33.430081.parquet'
- config_name: results
data_files:
- split: 2023_08_17T21_37_12.844888
path:
- results_2023-08-17T21:37:12.844888.parquet
- split: 2023_10_18T08_24_33.430081
path:
- results_2023-10-18T08-24-33.430081.parquet
- split: latest
path:
- results_2023-10-18T08-24-33.430081.parquet
---
# Dataset Card for Evaluation run of quantumaikr/llama-2-70b-fb16-orca-chat-10k
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/quantumaikr/llama-2-70b-fb16-orca-chat-10k
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [quantumaikr/llama-2-70b-fb16-orca-chat-10k](https://huggingface.co/quantumaikr/llama-2-70b-fb16-orca-chat-10k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_quantumaikr__llama-2-70b-fb16-orca-chat-10k",
"harness_winogrande_5",
split="train")
```
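Run splits are named after the run timestamp, with the punctuation of the date and time replaced by underscores (e.g. run `2023-10-18T08:24:33.430081` becomes split `2023_10_18T08_24_33.430081`). A small helper for working with those names — a sketch inferred from the naming pattern visible in the configs above, not an official API:

```python
from datetime import datetime

def timestamp_to_split_name(ts: str) -> str:
    """Convert a run timestamp like '2023-10-18T08:24:33.430081'
    into the corresponding split name '2023_10_18T08_24_33.430081'."""
    date_part, time_part = ts.split("T")
    return date_part.replace("-", "_") + "T" + time_part.replace(":", "_")

def latest_split(split_names) -> str:
    """Pick the most recent run split, ignoring the 'latest' alias."""
    runs = [s for s in split_names if s != "latest"]
    return max(runs, key=lambda s: datetime.strptime(s, "%Y_%m_%dT%H_%M_%S.%f"))
```

For example, `latest_split(["2023_08_17T21_37_12.844888", "2023_10_18T08_24_33.430081", "latest"])` resolves the alias to the newest concrete run split.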
## Latest results
These are the [latest results from run 2023-10-18T08:24:33.430081](https://huggingface.co/datasets/open-llm-leaderboard/details_quantumaikr__llama-2-70b-fb16-orca-chat-10k/blob/main/results_2023-10-18T08-24-33.430081.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"em": 0.0028313758389261743,
"em_stderr": 0.0005441551135494018,
"f1": 0.0711283557046983,
"f1_stderr": 0.001478786284269493,
"acc": 0.5552504139308139,
"acc_stderr": 0.011242265850160478
},
"harness|drop|3": {
"em": 0.0028313758389261743,
"em_stderr": 0.0005441551135494018,
"f1": 0.0711283557046983,
"f1_stderr": 0.001478786284269493
},
"harness|gsm8k|5": {
"acc": 0.26914329037149354,
"acc_stderr": 0.012216595457292733
},
"harness|winogrande|5": {
"acc": 0.8413575374901342,
"acc_stderr": 0.010267936243028223
}
}
```
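The aggregate `"all"` block above appears to be the unweighted mean of the per-task metrics (e.g. its `acc` matches the average of the gsm8k and winogrande accuracies). A minimal sketch of that averaging — an observation from the numbers shown, not an official specification of the leaderboard's aggregation:

```python
# Per-task metrics copied from the results JSON above.
per_task = {
    "harness|gsm8k|5": {"acc": 0.26914329037149354, "acc_stderr": 0.012216595457292733},
    "harness|winogrande|5": {"acc": 0.8413575374901342, "acc_stderr": 0.010267936243028223},
}

def aggregate(metric: str) -> float:
    """Unweighted mean of a metric across tasks (assumed aggregation rule)."""
    values = [scores[metric] for scores in per_task.values()]
    return sum(values) / len(values)

mean_acc = aggregate("acc")            # close to the reported 0.5552504139308139
mean_stderr = aggregate("acc_stderr")  # close to the reported 0.011242265850160478
```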
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
-0.30901017785072327,
-0.7779281735420227,
0.19237184524536133,
0.09448855370283127,
-0.25342896580696106,
0.23895922303199768,
-0.2750155031681061,
-0.22961044311523438,
0.4527119994163513,
0.5012325644493103,
-0.6281778216362,
-0.8830006122589111,
-0.6161760091781616,
0.12200421094894409... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
eaglewatch/Korean_Wikipedia_Dataset_for_GPT2_August_2022 | eaglewatch | 2023-08-25T05:35:38Z | 69 | 2 | null | [
"task_categories:question-answering",
"task_categories:text2text-generation",
"task_categories:translation",
"task_categories:conversational",
"task_categories:visual-question-answering",
"task_ids:open-domain-qa",
"task_ids:closed-domain-qa",
"task_ids:dialogue-generation",
"task_ids:visual-questio... | 2023-08-25T05:35:38Z | 2023-08-25T05:30:30.000Z | 2023-08-25T05:30:30 | ---
annotations_creators:
- other
language:
- ko
language_creators:
- other
license:
- apache-2.0
multilinguality:
- multilingual
pretty_name: Korean wikipedia dataset for GPT-2 training
size_categories:
- 100M<n<1B
source_datasets: []
tags:
- gpt2
- korean
- wikipedia
- pretrained
task_categories:
- question-answering
- text2text-generation
- translation
- conversational
- visual-question-answering
task_ids:
- open-domain-qa
- closed-domain-qa
- dialogue-generation
- visual-question-answering
viewer: true
---
# Dataset Card for korean_wikipedia_dataset_for_GPT2
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Contributions](#contributions)
## Dataset Description
The entire Korean-language Wikipedia, prepared for GPT-2 training, as of August 1st, 2022.
email: oscar.eaglewatch@gmail.com
### Dataset Summary
This dataset was created to pre-train a Korean GPT-2 model.
### Languages
Korean
## Dataset Structure
### Data Instances
Train Wikipedia article count: 334420

Validation Wikipedia article count: 83605
### Data Fields
'text'
### Data Splits
Train/validation is an 80%/20% random split (following the Pareto principle).
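A minimal sketch of such a random 80/20 article split — the seed and exact shuffling procedure are assumptions for illustration, not taken from the dataset:

```python
import random

def split_80_20(articles, seed=42):
    """Randomly split a list of articles into 80% train / 20% validation.

    The seed is an assumed default; the original split's seed is not documented.
    """
    indices = list(range(len(articles)))
    rng = random.Random(seed)
    rng.shuffle(indices)
    cut = int(len(indices) * 0.8)
    train = [articles[i] for i in indices[:cut]]
    validation = [articles[i] for i in indices[cut:]]
    return train, validation
```

Applied to the 418025 articles here, this yields the 334420/83605 counts reported above (418025 × 0.8 = 334420 exactly).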
## Dataset Creation
### Source Data
Wikipedia
https://dumps.wikimedia.org/kowiki/latest/kowiki-latest-pages-articles.xml.bz2
## Considerations for Using the Data
### Social Impact of Dataset
None
### Discussion of Biases
None
### Other Known Limitations
None
## Additional Information
### Dataset Curators
Yongwoo Jeong
| [
-0.41271257400512695,
-0.5127742886543274,
0.3053821325302124,
0.2487781047821045,
-0.48346802592277527,
-0.13668519258499146,
-0.3124728500843048,
-0.19358614087104797,
0.13387688994407654,
0.4451673924922943,
-0.5790312886238098,
-0.7188198566436768,
-0.7274004220962524,
-0.0169208887964... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HaltiaAI/Her-The-Movie-Samantha-and-Theodore-Dataset | HaltiaAI | 2023-09-15T13:28:07Z | 69 | 2 | null | [
"license:other",
"Movie Dialog",
"Her The Movie",
"Dialogs from the Her Movie (2013)",
"region:us"
] | 2023-09-15T13:28:07Z | 2023-09-15T11:37:12.000Z | 2023-09-15T11:37:12 | ---
license: other
tags:
- Movie Dialog
- Her The Movie
- Dialogs from the Her Movie (2013)
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
open-llm-leaderboard/details_sequelbox__SharpBalance | open-llm-leaderboard | 2023-10-23T18:53:21Z | 69 | 0 | null | [
"region:us"
] | 2023-10-23T18:53:21Z | 2023-10-09T05:50:11.000Z | 2023-10-09T05:50:11 | ---
pretty_name: Evaluation run of sequelbox/SharpBalance
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [sequelbox/SharpBalance](https://huggingface.co/sequelbox/SharpBalance) on the\
\ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_sequelbox__SharpBalance\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-23T18:53:09.205615](https://huggingface.co/datasets/open-llm-leaderboard/details_sequelbox__SharpBalance/blob/main/results_2023-10-23T18-53-09.205615.json)(note\
\ that there might be results for other tasks in the repo if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.30861996644295303,\n\
\ \"em_stderr\": 0.00473053301508219,\n \"f1\": 0.3692638422818801,\n\
\ \"f1_stderr\": 0.004628079358040571,\n \"acc\": 0.5935214367393442,\n\
\ \"acc_stderr\": 0.011697898266884079\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.30861996644295303,\n \"em_stderr\": 0.00473053301508219,\n\
\ \"f1\": 0.3692638422818801,\n \"f1_stderr\": 0.004628079358040571\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3464746019711903,\n \
\ \"acc_stderr\": 0.013107179054313396\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.840568271507498,\n \"acc_stderr\": 0.010288617479454764\n\
\ }\n}\n```"
repo_url: https://huggingface.co/sequelbox/SharpBalance
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|arc:challenge|25_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_23T18_53_09.205615
path:
- '**/details_harness|drop|3_2023-10-23T18-53-09.205615.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-23T18-53-09.205615.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_23T18_53_09.205615
path:
- '**/details_harness|gsm8k|5_2023-10-23T18-53-09.205615.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-23T18-53-09.205615.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hellaswag|10_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-09T05-49-47.525988.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-09T05-49-47.525988.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-09T05-49-47.525988.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_23T18_53_09.205615
path:
- '**/details_harness|winogrande|5_2023-10-23T18-53-09.205615.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-23T18-53-09.205615.parquet'
- config_name: results
data_files:
- split: 2023_10_09T05_49_47.525988
path:
- results_2023-10-09T05-49-47.525988.parquet
- split: 2023_10_23T18_53_09.205615
path:
- results_2023-10-23T18-53-09.205615.parquet
- split: latest
path:
- results_2023-10-23T18-53-09.205615.parquet
---
# Dataset Card for Evaluation run of sequelbox/SharpBalance
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/sequelbox/SharpBalance
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [sequelbox/SharpBalance](https://huggingface.co/sequelbox/SharpBalance) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_sequelbox__SharpBalance",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-23T18:53:09.205615](https://huggingface.co/datasets/open-llm-leaderboard/details_sequelbox__SharpBalance/blob/main/results_2023-10-23T18-53-09.205615.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.30861996644295303,
"em_stderr": 0.00473053301508219,
"f1": 0.3692638422818801,
"f1_stderr": 0.004628079358040571,
"acc": 0.5935214367393442,
"acc_stderr": 0.011697898266884079
},
"harness|drop|3": {
"em": 0.30861996644295303,
"em_stderr": 0.00473053301508219,
"f1": 0.3692638422818801,
"f1_stderr": 0.004628079358040571
},
"harness|gsm8k|5": {
"acc": 0.3464746019711903,
"acc_stderr": 0.013107179054313396
},
"harness|winogrande|5": {
"acc": 0.840568271507498,
"acc_stderr": 0.010288617479454764
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
-0.36989179253578186,
-0.648202121257782,
0.12908144295215607,
0.13779900968074799,
-0.1394830048084259,
0.23813535273075104,
-0.10730540007352829,
-0.006445433013141155,
0.174551323056221,
0.5211186408996582,
-0.8155398368835449,
-0.9086578488349915,
-0.6815180778503418,
0.089928194880485... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vgoldberg/longform_article_summarization | vgoldberg | 2023-10-11T19:36:28Z | 69 | 3 | null | [
"task_categories:summarization",
"size_categories:100K<n<1M",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-10-11T19:36:28Z | 2023-10-11T17:01:42.000Z | 2023-10-11T17:01:42 | ---
language:
- en
license: apache-2.0
size_categories:
- 100K<n<1M
task_categories:
- summarization
pretty_name: Long-Form Article Summarization Dataset
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 2243293725
num_examples: 105256
download_size: 880664627
dataset_size: 2243293725
---
**Dataset Name:** Long-Form Article Summarization Dataset
**Description:**
The Long-Form Article Summarization Dataset is meticulously curated for the purpose of fine-tuning Natural Language Processing (NLP) models specifically tailored for summarization tasks. It is a rich collection of long-form articles that have been carefully condensed and summarized. The dataset provides a diverse range of topics and writing styles, making it an invaluable resource for researchers and practitioners working on summarization algorithms and applications.
**Data Sources:**
1. **Billsum:** This dataset includes summaries of U.S. congressional and state bills, providing insights into legislative documents.
2. **Scientific Papers:** A collection of scientific papers covering various disciplines, enabling a deep dive into research-oriented content.
3. **Multi_news:** This dataset incorporates news articles, offering a blend of current events and journalistic writing styles.
4. **CCDV/Pubmed-Summarization:** Focused on biomedical literature, this dataset contains summaries from Pubmed articles, offering specialized content related to the field of medicine and life sciences.
**Data Combination:**
The Long-Form Article Summarization Dataset is an amalgamation of the above-mentioned datasets. By combining these diverse sources, the dataset achieves a comprehensive coverage of topics, styles, and domains. This fusion enhances the dataset's versatility and applicability across a wide array of domains, making it a valuable asset for NLP research and development.
**Data Preprocessing:**
To ensure equal representation of unique domains and to manage the scale of the dataset, large datasets were down-sampled. This meticulous preprocessing step guarantees that each domain is adequately represented, promoting a balanced and unbiased training environment for NLP models.
**Intended Use:**
This dataset is specifically designed for fine-tuning NLP models focused on summarization tasks. Researchers and developers can utilize this dataset to train and evaluate their algorithms for generating concise and informative summaries from long-form articles. The dataset's diverse origins and careful preprocessing make it an ideal choice for enhancing the summarization capabilities of NLP models.
**Access:**
The Long-Form Article Summarization Dataset is available for research purposes and can be accessed through authorized channels. Researchers and developers interested in using this dataset are encouraged to adhere to ethical guidelines and data usage policies governing the respective sources.
**Citation:**
Researchers and practitioners are expected to cite the original sources of the datasets used in this amalgamation, namely "Billsum," "Scientific Papers," "Multi_news," and "CCDV/Pubmed-Summarization," in addition to acknowledging the creation of the Long-Form Article Summarization Dataset in their publications and research outputs.
This dataset card provides an overview of the Long-Form Article Summarization Dataset, outlining its sources, preprocessing methods, intended use, and access guidelines, ensuring transparent and responsible utilization of the valuable data it encapsulates.
| [
-0.23675227165222168,
-0.568606972694397,
0.18963700532913208,
0.42256128787994385,
-0.5022823810577393,
0.048082079738378525,
-0.3014138340950012,
-0.4603404998779297,
0.46977078914642334,
0.7834908366203308,
-0.38169753551483154,
-0.6793022751808167,
-0.46720483899116516,
0.4443599879741... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zeio/pale | zeio | 2023-10-31T19:35:16Z | 69 | 0 | null | [
"task_categories:text-generation",
"task_categories:text-classification",
"task_categories:automatic-speech-recognition",
"language_creators:crowdsourced",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"gaming",
"region:us"
] | 2023-10-31T19:35:16Z | 2023-10-18T23:16:36.000Z | 2023-10-18T23:16:36 | ---
language:
- en
license: apache-2.0
tags:
- gaming
annotation_creators:
- crowdsourced
language_creators:
- crowdsourced
pretty_name: pale
size_categories:
- 10K<n<100K
task_categories:
- text-generation
- text-classification
- automatic-speech-recognition
---
# Dataset card for pale
## Table of contents
- [Dataset description](#dataset-description)
- [Dataset summary](#dataset-summary)
- [Dataset structure](#dataset-structure)
- [Dataset instance](#dataset-instance)
- [Dataset fields](#dataset-fields)
## Dataset description
- **Homepage:** [pale homepage](https://huggingface.co/datasets/zeio/pale)
- **Repository:** [pale repository](https://huggingface.co/datasets/zeio/pale)
- **Point of contact:** [Zeio Nara](mailto:zeionara@gmail.com)
- **Dataset version:** `30.10.2023`
### Dataset summary
This dataset contains league of legends champions' quotes parsed from [fandom](https://leagueoflegends.fandom.com).
See dataset viewer at the [derivative repo](/datasets/zeio/auto-pale).
See dataset usage example [at google colab](https://cutt.ly/3wEKDUI9).
The dataset is available in the following configurations:
1. `vanilla` - all data pulled from the website without significant modifications apart from the web page structure parsing;
1. `quotes` - a truncated version of the corpus, which doesn't contain sound effects;
1. `annotated` - an extended version of the full configuration with a couple of additional label columns;
1. `pulled` - same as `vanilla`, but sound files have been pulled from the website, and the `source` column is replaced with `sound`.
## Dataset structure
### Data instance
An example of an entry from the dataset is given below:
```json
{
"header": "Attack",
"subheader": "Attacking",
"text": "Kindred: \"The masks of the Kindred seek you!\"",
"source": "https://static.wikia.nocookie.net/leagueoflegends/images/1/12/Kindred_Original_Passive_Mark_Enemy_6.ogg/revision/latest?cb=20221204121356",
"champion": "kindred"
}
```
### Data fields
Each dataset entry therefore consists of the following fields:
- `header` - main category of the text;
- `subheader` - secondary category of the text (none in some cases);
- `text` - text said by the champion or description of sound made by the champion;
- `source` - link to the audio file (only `vanilla` configuration);
- `champion` - name of the champion in lowercase;
- `quote` - binary field displaying whether corresponding text contains quote or not (only `annotated` configuration);
- `sound` - audio data for the entry (only `pulled` configuration).
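As a rough sketch of how these fields might be consumed, the snippet below separates spoken quotes from sound-effect descriptions, mirroring what the `quote` label of the `annotated` configuration encodes. The second entry and the quotation-mark heuristic are illustrative assumptions, not part of the dataset tooling:

```python
# Illustrative sketch: distinguishing quote entries from sound-effect
# descriptions. The first entry is the sample from this card; the second
# is a hypothetical sound-effect entry invented for illustration.

entries = [
    {
        "header": "Attack",
        "subheader": "Attacking",
        "text": 'Kindred: "The masks of the Kindred seek you!"',
        "champion": "kindred",
    },
    {
        "header": "Attack",
        "subheader": "Attacking",
        "text": "Kindred laughs.",  # sound-effect description, no quote
        "champion": "kindred",
    },
]

def has_quote(entry: dict) -> bool:
    """Crude heuristic: treat an entry as a quote if its text contains quotation marks."""
    return '"' in entry["text"]

quotes = [e for e in entries if has_quote(e)]
print(len(quotes))  # 1
```

In practice the `annotated` configuration already ships this distinction as the binary `quote` field, so a heuristic like the one above is only needed when working from the `vanilla` data.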
| [
-0.42765894532203674,
-0.48672983050346375,
0.01243294682353735,
0.05068017542362213,
-0.35193997621536255,
0.018702801316976547,
-0.18774589896202087,
-0.6779409050941467,
0.5592133402824402,
0.555679440498352,
-1.0678890943527222,
-1.0999571084976196,
-0.3303423523902893,
0.2504878938198... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TeamDLD/neurips_challenge_dataset | TeamDLD | 2023-10-23T18:49:00Z | 69 | 1 | null | [
"license:apache-2.0",
"region:us"
] | 2023-10-23T18:49:00Z | 2023-10-23T18:45:05.000Z | 2023-10-23T18:45:05 | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: instruction
dtype: string
- name: response
dtype: string
- name: input
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 8475492539
num_examples: 3640808
download_size: 3508032503
dataset_size: 8475492539
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
godoyj/GPTextSum | godoyj | 2023-11-11T05:00:41Z | 69 | 0 | null | [
"region:us"
] | 2023-11-11T05:00:41Z | 2023-11-02T23:53:35.000Z | 2023-11-02T23:53:35 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Santp98/model_validation_ranked_ds | Santp98 | 2023-11-04T02:01:38Z | 69 | 0 | null | [
"region:us"
] | 2023-11-04T02:01:38Z | 2023-11-04T02:01:37.000Z | 2023-11-04T02:01:37 | ---
dataset_info:
features:
- name: rank_1
dtype: string
- name: rank_2
dtype: string
- name: rank_3
dtype: string
- name: rank_4
dtype: string
- name: rank_5
dtype: string
- name: rank_6
dtype: string
- name: rank_7
dtype: string
- name: rank_8
dtype: string
- name: rank_9
dtype: string
- name: rank_10
dtype: string
- name: rank_11
dtype: string
- name: rank_12
dtype: string
- name: rank_13
dtype: string
- name: rank_14
dtype: string
- name: rank_15
dtype: string
- name: rank_16
dtype: string
- name: rank_17
dtype: string
- name: rank_18
dtype: string
- name: rank_19
dtype: string
- name: rank_20
dtype: string
- name: rank_21
dtype: string
- name: rank_22
dtype: string
- name: rank_23
dtype: string
- name: rank_24
dtype: string
- name: rank_25
dtype: string
- name: rank_26
dtype: string
- name: rank_27
dtype: string
- name: rank_28
dtype: string
- name: rank_29
dtype: string
- name: rank_30
dtype: string
- name: rank_31
dtype: string
- name: rank_32
dtype: string
- name: rank_33
dtype: string
- name: rank_34
dtype: string
- name: rank_35
dtype: string
- name: rank_36
dtype: string
- name: rank_37
dtype: string
- name: rank_38
dtype: string
- name: rank_39
dtype: string
- name: rank_40
dtype: string
- name: rank_41
dtype: string
- name: rank_42
dtype: string
- name: rank_43
dtype: string
- name: rank_44
dtype: string
- name: rank_45
dtype: string
- name: rank_46
dtype: string
- name: rank_47
dtype: string
- name: rank_48
dtype: string
- name: rank_49
dtype: string
- name: rank_50
dtype: string
- name: rank_51
dtype: string
- name: rank_52
dtype: string
- name: rank_53
dtype: string
- name: rank_54
dtype: string
- name: rank_55
dtype: string
- name: rank_56
dtype: string
- name: rank_57
dtype: string
- name: rank_58
dtype: string
- name: rank_59
dtype: string
- name: rank_60
dtype: string
- name: rank_61
dtype: string
- name: rank_62
dtype: string
- name: rank_63
dtype: string
- name: rank_64
dtype: string
- name: rank_65
dtype: string
- name: rank_66
dtype: string
- name: rank_67
dtype: string
- name: rank_68
dtype: string
- name: rank_69
dtype: string
- name: rank_70
dtype: string
- name: rank_71
dtype: string
- name: rank_72
dtype: string
- name: rank_73
dtype: string
- name: rank_74
dtype: string
- name: rank_75
dtype: string
- name: rank_76
dtype: string
- name: rank_77
dtype: string
- name: rank_78
dtype: string
- name: rank_79
dtype: string
- name: rank_80
dtype: string
- name: rank_81
dtype: string
- name: rank_82
dtype: string
- name: rank_83
dtype: string
- name: rank_84
dtype: string
- name: rank_85
dtype: string
- name: rank_86
dtype: string
- name: rank_87
dtype: string
- name: rank_88
dtype: string
- name: rank_89
dtype: string
- name: rank_90
dtype: string
- name: rank_91
dtype: string
- name: rank_92
dtype: string
- name: rank_93
dtype: string
- name: rank_94
dtype: string
- name: rank_95
dtype: string
- name: rank_96
dtype: string
- name: rank_97
dtype: string
- name: rank_98
dtype: string
- name: rank_99
dtype: string
- name: rank_100
dtype: string
- name: generated_queries
dtype: string
splits:
- name: train
num_bytes: 820598
num_examples: 500
download_size: 308559
dataset_size: 820598
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "model_validation_ranked_ds"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4570186734199524,
-0.249339297413826,
0.463911235332489,
0.06125456094741821,
-0.1042124405503273,
-0.011651439592242241,
0.3805544972419739,
0.19410699605941772,
0.5538515448570251,
0.649614691734314,
-0.7321341037750244,
-0.8382962346076965,
-0.6536563634872437,
-0.12540140748023987,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gowitheflow/allnli-sup | gowitheflow | 2023-11-07T02:03:46Z | 69 | 0 | null | [
"region:us"
] | 2023-11-07T02:03:46Z | 2023-11-07T01:58:06.000Z | 2023-11-07T01:58:06 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
iamroot/stsb-contrastive-axes | iamroot | 2023-11-15T19:26:01Z | 69 | 0 | null | [
"region:us"
] | 2023-11-15T19:26:01Z | 2023-11-09T21:02:57.000Z | 2023-11-09T21:02:57 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text_a_embedding
sequence: float32
- name: text_b_embedding
sequence: float32
- name: prompt_embedding
sequence: float32
- name: text_a
dtype: string
- name: text_b
dtype: string
- name: prompt
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 219575612.0
num_examples: 23388
- name: test
num_bytes: 54893903.0
num_examples: 5847
download_size: 311913820
dataset_size: 274469515.0
---
# Glue-STSB with Contrastive Axes
Dataset format:
Each example pairs two sentences with a prompt describing the axis along which the sentences are similar or different.
Includes embeddings generated by `sentence-transformers`.
`text_a` and `text_b` are from the Glue-STSB dataset, `prompt` and `label` are machine generated.
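One way the embedding columns could be compared (a sketch under the assumption that similarity is measured with plain cosine similarity — the card does not specify the intended training objective, and the toy 4-dimensional vectors below are stand-ins for the real embeddings):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy stand-ins for text_a_embedding / text_b_embedding.
text_a = [0.1, 0.3, -0.2, 0.5]
text_b = [0.1, 0.2, -0.1, 0.4]
print(round(cosine(text_a, text_b), 3))
```

A high cosine between the sentence embeddings, combined with the `label` field, could then indicate whether the pair is similar along the axis given by `prompt`.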
| [
-0.31698399782180786,
-1.0340745449066162,
0.4458024799823761,
0.2781238555908203,
-0.3354837894439697,
0.2080986201763153,
0.2222946435213089,
0.2662466764450073,
0.8831483721733093,
0.4010769724845886,
-1.1540806293487549,
-0.5622354745864868,
-0.6023878455162048,
0.17445358633995056,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
princeton-nlp/datasets-for-simcse | princeton-nlp | 2021-09-03T12:44:29Z | 68 | 1 | null | [
"region:us"
] | 2021-09-03T12:44:29Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
blo05/cleaned_wiki_en_80-100 | blo05 | 2022-04-04T08:19:44Z | 68 | 0 | null | [
"region:us"
] | 2022-04-04T08:19:44Z | 2022-04-04T07:49:59.000Z | 2022-04-04T07:49:59 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
VietAI/spoken_norm_assignment | VietAI | 2022-07-12T13:33:30Z | 68 | 3 | null | [
"region:us"
] | 2022-07-12T13:33:30Z | 2022-07-12T13:03:29.000Z | 2022-07-12T13:03:29 | # VietAI assignment: Vietnamese Inverse Text Normalization dataset
## Dataset Description
Inverse text normalization (ITN) is the task of transforming text from spoken style to written style. It is particularly useful in automatic speech recognition (ASR) systems, where proper names are often misrecognized as their pronunciations instead of their written forms. By applying ITN, we can improve the readability of the ASR system's output significantly. This dataset provides data for the ITN task in the Vietnamese language.
For example:
| Spoken | Written | Types |
|--------------------------------------------------|--------------|----------------------------|
| tám giờ chín phút ngày ba tháng tư năm hai nghìn | 8h9 3/4/2000 | time and date |
| tám mét khối năm mươi ki lô gam | 8m3 50 kg | number and unit of measure |
| không chín sáu hai bảy bảy chín chín không bốn | 0962779904 | phone number |
### Data Splits
The ITN dataset has 3 splits: _train_, _validation_, and _test_. In the _train_ and _validation_ splits, both the input (src) and its label (tgt) are provided. In the _test_ split, only the input (src) is provided.
| Dataset Split | Number of Instances in Split |
| ------------- |----------------------------- |
| Train | 500,000 |
| Validation | 2,500 |
| Test | 2,500 |
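To make the task concrete, here is a toy rule-based converter for the phone-number row in the examples table above. This is a hand-rolled sketch for illustration only — the dataset is intended for training learned models, and real ITN must also handle dates, units, and much more:

```python
# Toy spoken-to-written conversion for Vietnamese digit sequences
# (illustrates the phone-number example; not the dataset's method).
DIGITS = {
    "không": "0", "một": "1", "hai": "2", "ba": "3", "bốn": "4",
    "năm": "5", "sáu": "6", "bảy": "7", "tám": "8", "chín": "9",
}

def spoken_digits_to_written(spoken: str) -> str:
    """Map each spoken Vietnamese digit word to its numeral, e.g. for phone numbers."""
    return "".join(DIGITS[word] for word in spoken.split())

print(spoken_digits_to_written("không chín sáu hai bảy bảy chín chín không bốn"))
# 0962779904
```

A rule table like this breaks down as soon as context matters (e.g. "năm" can mean both the digit 5 and the word "year"), which is exactly why a learned sequence-to-sequence approach trained on this dataset is preferable.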
| [
-0.18264038860797882,
-0.4567791819572449,
0.08962170034646988,
0.23843072354793549,
-0.6234800815582275,
-0.4680331349372864,
-0.26719754934310913,
0.19665326178073883,
0.07627199590206146,
0.6986961960792542,
-0.42019814252853394,
-0.933386504650116,
-0.5253733396530151,
0.26367902755737... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pierreguillou/DocLayNet-large | pierreguillou | 2023-05-17T08:56:48Z | 68 | 3 | null | [
"task_categories:object-detection",
"task_categories:image-segmentation",
"task_categories:token-classification",
"task_ids:instance-segmentation",
"annotations_creators:crowdsourced",
"size_categories:10K<n<100K",
"language:en",
"language:de",
"language:fr",
"language:ja",
"license:other",
"D... | 2023-05-17T08:56:48Z | 2023-01-25T15:14:52.000Z | 2023-01-25T15:14:52 | ---
language:
- en
- de
- fr
- ja
annotations_creators:
- crowdsourced
license: other
pretty_name: DocLayNet large
size_categories:
- 10K<n<100K
tags:
- DocLayNet
- COCO
- PDF
- IBM
- Financial-Reports
- Finance
- Manuals
- Scientific-Articles
- Science
- Laws
- Law
- Regulations
- Patents
- Government-Tenders
- object-detection
- image-segmentation
- token-classification
task_categories:
- object-detection
- image-segmentation
- token-classification
task_ids:
- instance-segmentation
---
# Dataset Card for DocLayNet large
## About this card (01/27/2023)
### Property and license
All information from this page but the content of this paragraph "About this card (01/27/2023)" has been copied/pasted from [Dataset Card for DocLayNet](https://huggingface.co/datasets/ds4sd/DocLayNet).
DocLayNet is a dataset created by Deep Search (IBM Research) published under [license CDLA-Permissive-1.0](https://huggingface.co/datasets/ds4sd/DocLayNet#licensing-information).
I do not claim any rights to the data taken from this dataset and published on this page.
### DocLayNet dataset
[DocLayNet dataset](https://github.com/DS4SD/DocLayNet) (IBM) provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories.
To date, the dataset can be downloaded through direct links or as a dataset from the Hugging Face datasets library:
- direct links: [doclaynet_core.zip](https://codait-cos-dax.s3.us.cloud-object-storage.appdomain.cloud/dax-doclaynet/1.0.0/DocLayNet_core.zip) (28 GiB), [doclaynet_extra.zip](https://codait-cos-dax.s3.us.cloud-object-storage.appdomain.cloud/dax-doclaynet/1.0.0/DocLayNet_extra.zip) (7.5 GiB)
- Hugging Face dataset library: [dataset DocLayNet](https://huggingface.co/datasets/ds4sd/DocLayNet)
Paper: [DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis](https://arxiv.org/abs/2206.01062) (06/02/2022)
### Processing into a format facilitating its use by HF notebooks
These 2 options require downloading all the data (approximately 30 GiB), which takes time (about 45 min in Google Colab) and a large amount of disk space. This could limit experimentation for people with low resources.
Moreover, even when using the download via the HF datasets library, it is necessary to download the EXTRA zip separately ([doclaynet_extra.zip](https://codait-cos-dax.s3.us.cloud-object-storage.appdomain.cloud/dax-doclaynet/1.0.0/DocLayNet_extra.zip), 7.5 GiB) to associate the annotated bounding boxes with the text extracted by OCR from the PDFs. This operation also requires additional code because the bounding boxes of the texts do not necessarily correspond to those annotated (a calculation of the percentage of area shared between the annotated bounding boxes and those of the texts makes it possible to match them).
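The percentage-of-common-area check mentioned above can be sketched in a few lines of plain Python. This is a minimal illustration, not the exact code used to build the dataset; the `[x0, y0, x1, y1]` box format is an assumption:

```python
def overlap_ratio(box_a, box_b):
    """Fraction of box_a's area covered by box_b.

    Boxes are [x0, y0, x1, y1]; a ratio close to 1.0 suggests the OCR
    text box belongs to the annotated layout box.
    """
    ix0 = max(box_a[0], box_b[0])
    iy0 = max(box_a[1], box_b[1])
    ix1 = min(box_a[2], box_b[2])
    iy1 = min(box_a[3], box_b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    return inter / area_a if area_a > 0 else 0.0

# An OCR line fully inside an annotated paragraph box scores 1.0
print(overlap_ratio([10, 10, 20, 20], [0, 0, 100, 100]))  # 1.0
```

Matching then amounts to keeping, for each OCR text box, the annotated box with the highest ratio above some threshold.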
At last, in order to use Hugging Face notebooks on fine-tuning layout models like LayoutLMv3 or LiLT, DocLayNet data must be processed in a proper format.
For all these reasons, I decided to process the DocLayNet dataset:
- into 3 datasets of different sizes:
- [DocLayNet small](https://huggingface.co/datasets/pierreguillou/DocLayNet-small) (about 1% of DocLayNet) < 1,000 document images (691 train, 64 val, 49 test)
- [DocLayNet base](https://huggingface.co/datasets/pierreguillou/DocLayNet-base) (about 10% of DocLayNet) < 10,000 document images (6,910 train, 648 val, 499 test)
- [DocLayNet large](https://huggingface.co/datasets/pierreguillou/DocLayNet-large) (about 100% of DocLayNet) < 100,000 document images (69,103 train, 6,480 val, 4,994 test)
- with associated texts and PDFs (base64 format),
- and in a format facilitating their use by HF notebooks.
*Note: the layout HF notebooks will greatly help participants of the IBM [ICDAR 2023 Competition on Robust Layout Segmentation in Corporate Documents](https://ds4sd.github.io/icdar23-doclaynet/)!*
### About PDFs languages
Citation of the page 3 of the [DocLayNet paper](https://arxiv.org/abs/2206.01062):
"We did not control the document selection with regard to language. **The vast majority of documents contained in DocLayNet (close to 95%) are published in English language.** However, DocLayNet also contains a number of documents in other languages such as German (2.5%), French (1.0%) and Japanese (1.0%). While the document language has negligible impact on the performance of computer vision methods such as object detection and segmentation models, it might prove challenging for layout analysis methods which exploit textual features."
### About PDFs categories distribution
Citation of the page 3 of the [DocLayNet paper](https://arxiv.org/abs/2206.01062):
"The pages in DocLayNet can be grouped into **six distinct categories**, namely Financial Reports, Manuals, Scientific Articles, Laws & Regulations, Patents and Government Tenders. Each document category was sourced from various repositories. For example, Financial Reports contain both free-style format annual reports which expose company-specific, artistic layouts as well as the more formal SEC filings. The two largest categories (Financial Reports and Manuals) contain a large amount of free-style layouts in order to obtain maximum variability. In the other four categories, we boosted the variability by mixing documents from independent providers, such as different government websites or publishers. In Figure 2, we show the document categories contained in DocLayNet with their respective sizes."

### Download & overview
The size of the DocLayNet large is about 100% of the DocLayNet dataset.
**WARNING** The following code allows you to download DocLayNet large, but it cannot run to completion in Google Colab because of the disk space needed to store the cache data and the CPU RAM needed to download the data (for example, the cache data in /home/ubuntu/.cache/huggingface/datasets/ needs almost 120 GB during the downloading process). And even with a suitable instance, the download time of the DocLayNet large dataset is around 1h50. This is one more reason to test your fine-tuning code on [DocLayNet small](https://huggingface.co/datasets/pierreguillou/DocLayNet-small) and/or [DocLayNet base](https://huggingface.co/datasets/pierreguillou/DocLayNet-base) 😊
```
# !pip install -q datasets
from datasets import load_dataset
dataset_large = load_dataset("pierreguillou/DocLayNet-large")
# overview of dataset_large
DatasetDict({
train: Dataset({
features: ['id', 'texts', 'bboxes_block', 'bboxes_line', 'categories', 'image', 'pdf', 'page_hash', 'original_filename', 'page_no', 'num_pages', 'original_width', 'original_height', 'coco_width', 'coco_height', 'collection', 'doc_category'],
num_rows: 69103
})
validation: Dataset({
features: ['id', 'texts', 'bboxes_block', 'bboxes_line', 'categories', 'image', 'pdf', 'page_hash', 'original_filename', 'page_no', 'num_pages', 'original_width', 'original_height', 'coco_width', 'coco_height', 'collection', 'doc_category'],
num_rows: 6480
})
test: Dataset({
features: ['id', 'texts', 'bboxes_block', 'bboxes_line', 'categories', 'image', 'pdf', 'page_hash', 'original_filename', 'page_no', 'num_pages', 'original_width', 'original_height', 'coco_width', 'coco_height', 'collection', 'doc_category'],
num_rows: 4994
})
})
```
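Each row exposes both the original page size (`original_width`, `original_height`) and the resized COCO size (`coco_width`, `coco_height`, normally 1025); converting a box between the two coordinate systems is a simple rescale. A minimal sketch, assuming the COCO `[x, y, width, height]` box format:

```python
def coco_box_to_original(bbox, original_size, coco_size=(1025, 1025)):
    """Rescale a COCO [x, y, w, h] box from the resized page image
    back to the original page dimensions."""
    sx = original_size[0] / coco_size[0]
    sy = original_size[1] / coco_size[1]
    x, y, w, h = bbox
    return [x * sx, y * sy, w * sx, h * sy]

# On a 2050x4100 original page, x/w double and y/h quadruple
print(coco_box_to_original([100, 100, 50, 50], (2050, 4100)))
```

The same function with the sizes swapped maps original-page coordinates back onto the resized image.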
### Annotated bounding boxes
The DocLayNet dataset makes it easy to display a document image with the annotated bounding boxes of paragraphs or lines.
Check the notebook [processing_DocLayNet_dataset_to_be_used_by_layout_models_of_HF_hub.ipynb](https://github.com/piegu/language-models/blob/master/processing_DocLayNet_dataset_to_be_used_by_layout_models_of_HF_hub.ipynb) in order to get the code.
#### Paragraphs

#### Lines

### HF notebooks
- [notebooks LayoutLM](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LayoutLM) (Niels Rogge)
- [notebooks LayoutLMv2](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LayoutLMv2) (Niels Rogge)
- [notebooks LayoutLMv3](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LayoutLMv3) (Niels Rogge)
- [notebooks LiLT](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LiLT) (Niels Rogge)
- [Document AI: Fine-tuning LiLT for document-understanding using Hugging Face Transformers](https://github.com/philschmid/document-ai-transformers/blob/main/training/lilt_funsd.ipynb) ([post](https://www.philschmid.de/fine-tuning-lilt#3-fine-tune-and-evaluate-lilt) of Phil Schmid)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://developer.ibm.com/exchanges/data/all/doclaynet/
- **Repository:** https://github.com/DS4SD/DocLayNet
- **Paper:** https://doi.org/10.1145/3534678.3539043
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
DocLayNet provides page-by-page layout segmentation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique pages from 6 document categories. It provides several unique features compared to related work such as PubLayNet or DocBank:
1. *Human Annotation*: DocLayNet is hand-annotated by well-trained experts, providing a gold-standard in layout segmentation through human recognition and interpretation of each page layout
2. *Large layout variability*: DocLayNet includes diverse and complex layouts from a large variety of public sources in Finance, Science, Patents, Tenders, Law texts and Manuals
3. *Detailed label set*: DocLayNet defines 11 class labels to distinguish layout features in high detail.
4. *Redundant annotations*: A fraction of the pages in DocLayNet are double- or triple-annotated, allowing to estimate annotation uncertainty and an upper-bound of achievable prediction accuracy with ML models
5. *Pre-defined train- test- and validation-sets*: DocLayNet provides fixed sets for each to ensure proportional representation of the class-labels and avoid leakage of unique layout styles across the sets.
### Supported Tasks and Leaderboards
We are hosting a competition in ICDAR 2023 based on the DocLayNet dataset. For more information see https://ds4sd.github.io/icdar23-doclaynet/.
## Dataset Structure
### Data Fields
DocLayNet provides four types of data assets:
1. PNG images of all pages, resized to square `1025 x 1025px`
2. Bounding-box annotations in COCO format for each PNG image
3. Extra: Single-page PDF files matching each PNG image
4. Extra: JSON file matching each PDF page, which provides the digital text cells with coordinates and content
The COCO image records are defined as in this example:
```js
...
{
"id": 1,
"width": 1025,
"height": 1025,
"file_name": "132a855ee8b23533d8ae69af0049c038171a06ddfcac892c3c6d7e6b4091c642.png",
// Custom fields:
"doc_category": "financial_reports" // high-level document category
"collection": "ann_reports_00_04_fancy", // sub-collection name
"doc_name": "NASDAQ_FFIN_2002.pdf", // original document filename
"page_no": 9, // page number in original document
"precedence": 0, // Annotation order, non-zero in case of redundant double- or triple-annotation
},
...
```
The `doc_category` field uses one of the following constants:
```
financial_reports,
scientific_articles,
laws_and_regulations,
government_tenders,
manuals,
patents
```
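As a toy illustration of working with these records, the custom fields can be read with nothing but the standard library (the records below are made up for the example):

```python
import json
from collections import Counter

# Hypothetical excerpt of a COCO "images" list with the custom fields
records = json.loads("""[
  {"id": 1, "doc_category": "financial_reports", "page_no": 9},
  {"id": 2, "doc_category": "patents", "page_no": 1},
  {"id": 3, "doc_category": "financial_reports", "page_no": 2}
]""")

# Count pages per high-level document category
counts = Counter(r["doc_category"] for r in records)
print(counts["financial_reports"])  # 2
```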
### Data Splits
The dataset provides three splits:
- `train`
- `val`
- `test`
## Dataset Creation
### Annotations
#### Annotation process
The labeling guidelines used for training the annotation experts are available at [DocLayNet_Labeling_Guide_Public.pdf](https://raw.githubusercontent.com/DS4SD/DocLayNet/main/assets/DocLayNet_Labeling_Guide_Public.pdf).
#### Who are the annotators?
Annotations are crowdsourced.
## Additional Information
### Dataset Curators
The dataset is curated by the [Deep Search team](https://ds4sd.github.io/) at IBM Research.
You can contact us at [deepsearch-core@zurich.ibm.com](mailto:deepsearch-core@zurich.ibm.com).
Curators:
- Christoph Auer, [@cau-git](https://github.com/cau-git)
- Michele Dolfi, [@dolfim-ibm](https://github.com/dolfim-ibm)
- Ahmed Nassar, [@nassarofficial](https://github.com/nassarofficial)
- Peter Staar, [@PeterStaar-IBM](https://github.com/PeterStaar-IBM)
### Licensing Information
License: [CDLA-Permissive-1.0](https://cdla.io/permissive-1-0/)
### Citation Information
```bib
@article{doclaynet2022,
title = {DocLayNet: A Large Human-Annotated Dataset for Document-Layout Segmentation},
doi = {10.1145/3534678.3539043},
url = {https://doi.org/10.1145/3534678.3539043},
author = {Pfitzmann, Birgit and Auer, Christoph and Dolfi, Michele and Nassar, Ahmed S and Staar, Peter W J},
year = {2022},
isbn = {9781450393850},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
booktitle = {Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
pages = {3743–3751},
numpages = {9},
location = {Washington DC, USA},
series = {KDD '22}
}
```
### Contributions
Thanks to [@dolfim-ibm](https://github.com/dolfim-ibm), [@cau-git](https://github.com/cau-git) for adding this dataset. | [
-0.541672945022583,
-0.5980140566825867,
0.2751787006855011,
0.3559344708919525,
-0.1505560427904129,
-0.3246145248413086,
-0.12375400960445404,
-0.3740972876548767,
0.4243476092815399,
0.5966852903366089,
-0.37889137864112854,
-0.6071535348892212,
-0.49314624071121216,
0.10480162501335144... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Piro17/affectnethq | Piro17 | 2023-02-16T06:56:12Z | 68 | 2 | null | [
"region:us"
] | 2023-02-16T06:56:12Z | 2023-02-16T06:47:30.000Z | 2023-02-16T06:47:30 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': anger
'1': disgust
'2': fear
'3': happy
'4': neutral
'5': sad
'6': surprise
splits:
- name: train
num_bytes: 5858852632.634
num_examples: 27823
download_size: 0
dataset_size: 5858852632.634
---
# Dataset Card for "affectnethq"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5064321160316467,
-0.17978617548942566,
0.07090041041374207,
0.29599910974502563,
-0.07801482826471329,
-0.1407100111246109,
0.32734787464141846,
-0.1804019957780838,
1.1348967552185059,
0.4610017240047455,
-0.8951871991157532,
-0.709097146987915,
-0.5531981587409973,
-0.281293749809265... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pythainlp/thainer-corpus-v2 | pythainlp | 2023-03-23T05:23:46Z | 68 | 0 | null | [
"task_categories:token-classification",
"language:th",
"license:cc-by-3.0",
"region:us"
] | 2023-03-23T05:23:46Z | 2023-03-22T16:12:10.000Z | 2023-03-22T16:12:10 | ---
dataset_info:
features:
- name: words
sequence: string
- name: ner
sequence:
class_label:
names:
'0': B-PERSON
'1': I-PERSON
'2': O
'3': B-ORGANIZATION
'4': B-LOCATION
'5': I-ORGANIZATION
'6': I-LOCATION
'7': B-DATE
'8': I-DATE
'9': B-TIME
'10': I-TIME
'11': B-MONEY
'12': I-MONEY
'13': B-FACILITY
'14': I-FACILITY
'15': B-URL
'16': I-URL
'17': B-PERCENT
'18': I-PERCENT
'19': B-LEN
'20': I-LEN
'21': B-AGO
'22': I-AGO
'23': B-LAW
'24': I-LAW
'25': B-PHONE
'26': I-PHONE
'27': B-EMAIL
'28': I-EMAIL
'29': B-ZIP
'30': B-TEMPERATURE
'31': I-TEMPERATURE
'32': B-DTAE
'33': I-DTAE
'34': B-DATA
'35': I-DATA
splits:
- name: train
num_bytes: 3736419
num_examples: 3938
- name: validation
num_bytes: 1214580
num_examples: 1313
- name: test
num_bytes: 1242609
num_examples: 1313
download_size: 974230
dataset_size: 6193608
license: cc-by-3.0
task_categories:
- token-classification
language:
- th
---
# Dataset Card for "thainer-corpus-v2"
Thai Named Entity Recognition Corpus
Home Page: [https://pythainlp.github.io/Thai-NER/version/2](https://pythainlp.github.io/Thai-NER/version/2)
Training script and split data: [https://zenodo.org/record/7761354](https://zenodo.org/record/7761354)
**You can download the .conll files to train a named entity model at [https://zenodo.org/record/7761354](https://zenodo.org/record/7761354).**
**Size**
- Train: 3,938 docs
- Validation: 1,313 docs
- Test: 1,313 Docs
Some data come from crowdsourcing between Dec 2018 - Nov 2019. [https://github.com/wannaphong/thai-ner](https://github.com/wannaphong/thai-ner)
**Domain**
- News (IT, politics, economy, social)
- PR (KKU news)
- general
**Source**
- I use some data from Nutcha’s thesis (http://pioneer.chula.ac.th/~awirote/Data-Nutcha.zip) and improve the data by rechecking and adding more tagging.
- Twitter
- Blognone.com - IT news
- thaigov.go.th
- kku.ac.th
And more (the list has been lost).
**Tag**
- DATE - date
- TIME - time
- EMAIL - email
- LEN - length
- LOCATION - Location
- ORGANIZATION - Company / Organization
- PERSON - Person name
- PHONE - phone number
- TEMPERATURE - temperature
- URL - URL
- ZIP - Zip code
- MONEY - the amount
- LAW - legislation
- PERCENT - PERCENT
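Because `ner` is stored as a sequence of class-label ids, decoding a sentence back to IOB tags and grouping them into entities takes only a few lines of plain Python. A hedged sketch with a hand-made toy sentence; the `id2label` dictionary below lists only the labels used in the example (the full mapping is given in the dataset features above):

```python
# Toy subset of the id -> label mapping defined in the dataset features
id2label = {0: "B-PERSON", 1: "I-PERSON", 2: "O"}

words = ["นาย", "สมชาย", "ใจดี", "ไป", "ตลาด"]
ner_ids = [0, 1, 1, 2, 2]

tags = [id2label[i] for i in ner_ids]

# Group consecutive B-/I- tags into (entity_type, text) spans;
# Thai is written without spaces, so words are joined directly
entities, current, current_type = [], [], None
for word, tag in zip(words, tags):
    if tag.startswith("B-"):
        if current:
            entities.append((current_type, "".join(current)))
        current, current_type = [word], tag[2:]
    elif tag.startswith("I-") and current:
        current.append(word)
    else:
        if current:
            entities.append((current_type, "".join(current)))
        current, current_type = [], None
if current:
    entities.append((current_type, "".join(current)))

print(entities)  # [('PERSON', 'นายสมชายใจดี')]
```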
Download: [HuggingFace Hub](https://huggingface.co/datasets/pythainlp/thainer-corpus-v2)
## Cite
> Wannaphong Phatthiyaphaibun. (2022). Thai NER 2.0 (2.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.7761354
or BibTeX
```
@dataset{wannaphong_phatthiyaphaibun_2022_7761354,
author = {Wannaphong Phatthiyaphaibun},
title = {Thai NER 2.0},
month = sep,
year = 2022,
publisher = {Zenodo},
version = {2.0},
doi = {10.5281/zenodo.7761354},
url = {https://doi.org/10.5281/zenodo.7761354}
}
``` | [
-0.3806767165660858,
-0.23721158504486084,
-0.0003969206882175058,
0.19269411265850067,
-0.4705016314983368,
-0.09617489576339722,
-0.3724379241466522,
-0.5022334456443787,
0.4955552816390991,
0.6198939681053162,
-0.13738490641117096,
-0.5474779009819031,
-0.43331876397132874,
0.3091873526... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
argilla/databricks-dolly-15k-es-deepl | argilla | 2023-04-13T10:30:19Z | 68 | 0 | null | [
"region:us"
] | 2023-04-13T10:30:19Z | 2023-04-13T10:30:14.000Z | 2023-04-13T10:30:14 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: category
dtype: string
- name: instruction_en
dtype: string
- name: context_en
dtype: string
- name: response_en
dtype: string
splits:
- name: train
num_bytes: 25838910
num_examples: 15015
download_size: 16464221
dataset_size: 25838910
---
# Dataset Card for "databricks-dolly-15k-es-deepl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4313609004020691,
-0.4038412868976593,
0.0460301898419857,
0.44544902443885803,
-0.2510358989238739,
0.3395273983478546,
0.4151378870010376,
-0.017708422616124153,
0.6118656396865845,
0.5266860723495483,
-1.0184290409088135,
-0.8604390025138855,
-0.5164608359336853,
-0.1171913668513298,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rguo123/trump_tweets | rguo123 | 2023-08-07T14:11:46Z | 68 | 0 | null | [
"region:us"
] | 2023-08-07T14:11:46Z | 2023-07-10T19:55:56.000Z | 2023-07-10T19:55:56 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PetraAI/PetraAI | PetraAI | 2023-09-14T21:04:52Z | 68 | 5 | null | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:table-question-answering",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:translation",
"task_categories:summarization",
"task_categories:conversational",... | 2023-09-14T21:04:52Z | 2023-08-01T01:34:38.000Z | 2023-08-01T01:34:38 | ---
license: apache-2.0
task_categories:
- text-classification
- token-classification
- table-question-answering
- question-answering
- zero-shot-classification
- translation
- summarization
- conversational
- feature-extraction
- text-generation
- text2text-generation
- fill-mask
- sentence-similarity
- text-to-speech
- automatic-speech-recognition
- audio-to-audio
- audio-classification
- voice-activity-detection
- depth-estimation
- image-classification
- object-detection
- image-segmentation
- text-to-image
- image-to-text
- image-to-image
- unconditional-image-generation
- video-classification
- reinforcement-learning
- robotics
- tabular-classification
- tabular-regression
- tabular-to-text
- table-to-text
- multiple-choice
- text-retrieval
- time-series-forecasting
- text-to-video
- visual-question-answering
- zero-shot-image-classification
- graph-ml
language:
- ar
- en
tags:
- chemistry
- biology
- finance
- legal
- music
- art
- code
- climate
- medical
pretty_name: PETRA
size_categories:
- 1M<n<10M
---
# PETRA
## Overview
PETRA is a multilingual dataset for training and evaluating AI systems on a diverse range of tasks across multiple modalities. It contains data in Arabic and English for tasks including translation, summarization, question answering, and more.
## Dataset Structure
- Data is separated by language into `/ar` and `/en` directories
- Within each language directory, data is separated by task into subdirectories
- Tasks include:
- Translation
- Summarization
- Conversational
- Feature extraction
- Zero-shot classification
- Text generation
- Fill mask
- Sentence similarity
- Text-to-speech
- Automatic speech recognition
- Text classification
- Token classification
- Table question answering
- Question answering
- Text2text generation
- Audio-to-audio
- Audio classification
- Voice activity detection
- Depth estimation
- Image classification
- Object detection
- Image segmentation
- Text-to-image
- Image-to-text
- Image-to-image
- Unconditional image generation
- Reinforcement learning
- Video classification
- Robotics
- Tabular classification
- Tabular regression
- Table-to-text
- Multiple choice
- Text retrieval
- Tabular-to-text
- Text-to-video
- Time series forecasting
- Visual question answering
- Zero-shot image classification
- Graph ML
## Dataset Tags
- code
- art
- chemistry
- biology
- finance
- legal
- music
- climate
- medical
## Dataset Size
1M < n < 10M samples
## Licenses
Apache 2.0
## Citation
If you use this dataset, please cite it as:
[cite paper, arXiv, etc]
```bib
@article{PetraAI2022PetraAI,
  title={PetraAI: A Massive Multilingual Dataset for Machine Learning},
  author={First Last and First Last},
  journal={arXiv},
  year={2022},
  url={https://huggingface.co/datasets/PetraAI/PetraAI}
}
```
## Contact
For any questions, please reach out to [shadilytn@gmail.com]
# Dataset Cards
## What are Dataset Cards?
Each dataset may be documented by the `README.md` file in the repository. This file is called a **dataset card**, and the Hugging Face Hub will render its contents on the dataset’s main page. To inform users about how to responsibly use the data, it’s a good idea to include information about any potential biases within the dataset. Generally, dataset cards help users understand the contents of the dataset and give context for how the dataset should be used.
You can also add dataset metadata to your card. The metadata describes important information about a dataset such as its license, language, and size. It also contains tags to help users discover a dataset on the Hub. Tags are defined in a YAML metadata section at the top of the `README.md` file.
## Dataset card metadata
A dataset repo will render its README.md as a dataset card. To control how the Hub displays the card, you should create a YAML section in the README file to define some metadata. Start by adding three --- at the top, then include all of the relevant metadata, and close the section with another group of --- like the example below:
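A minimal illustration of such a metadata section (the field values here are placeholders, not this dataset's actual metadata):

```yaml
---
license: apache-2.0
language:
  - en
task_categories:
  - text-classification
size_categories:
  - 1M<n<10M
---
```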
The metadata that you add to the dataset card enables certain interactions on the Hub. For example:
- Allow users to filter and discover datasets at https://huggingface.co/datasets.
- If you choose a license using the keywords listed in the right column of this table, the license will be displayed on the dataset page.
When creating a README.md file in a dataset repository on the Hub, use Metadata UI to fill the main metadata:
To see metadata fields, see the detailed dataset card metadata specification here.
### Dataset card creation guide
For a step-by-step guide on creating a dataset card, check out the Create a dataset card guide.
Reading through existing dataset cards, such as the ELI5 dataset card, is a great way to familiarize yourself with the common conventions.
### Linking a Paper
If the dataset card includes a link to a paper on arXiv, the Hub will extract the arXiv ID and include it in the dataset tags with the format `arxiv:<PAPER ID>`. Clicking on the tag will let you:
- Visit the Paper page
- Filter for other models on the Hub that cite the same paper.
Read more about paper pages here.
https://huggingface.co/docs/hub/paper-pages | [
-0.6909784078598022,
-0.6058773398399353,
0.043753478676080704,
0.36085245013237,
-0.1445973664522171,
-0.11121945083141327,
-0.06659112125635147,
-0.3532925546169281,
0.31630077958106995,
0.5601819157600403,
-0.6536952257156372,
-0.9837087988853455,
-0.6064593195915222,
-0.010183653794229... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Tommert25/extradata0908 | Tommert25 | 2023-11-15T14:16:48Z | 68 | 0 | null | [
"region:us"
] | 2023-11-15T14:16:48Z | 2023-08-09T13:52:42.000Z | 2023-08-09T13:52:42 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ccore/wikipedia-QA | ccore | 2023-09-11T21:46:03Z | 68 | 0 | null | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"wikipeda",
"markdown",
"qa",
"region:us"
] | 2023-09-11T21:46:03Z | 2023-09-11T20:51:52.000Z | 2023-09-11T20:51:52 | ---
task_categories:
- text-generation
tags:
- wikipeda
- markdown
- qa
size_categories:
- 10K<n<100K
---
The GoodWiki dataset in QA format: questions are asked using the page description, and the question appears again at the end of each page so that the network learns how to create questions from content.
-0.6945378184318542,
-0.6476208567619324,
0.08779047429561615,
0.1562386006116867,
-0.2170124053955078,
-0.4983254373073578,
0.050447966903448105,
-0.12042494118213654,
0.6605302095413208,
0.878809928894043,
-0.5564457178115845,
-0.29157552123069763,
0.11725210398435593,
0.3797743618488312... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kyujinpy/KoCoT_2000 | kyujinpy | 2023-11-03T02:49:40Z | 68 | 9 | null | [
"task_categories:text-generation",
"task_categories:text-classification",
"size_categories:1k<n<5k",
"language:en",
"license:cc-by-nc-4.0",
"arxiv:2305.14045",
"region:us"
] | 2023-11-03T02:49:40Z | 2023-09-22T16:41:36.000Z | 2023-09-22T16:41:36 | ---
license: cc-by-nc-4.0
task_categories:
- text-generation
- text-classification
language:
- en
size_categories:
- 1k<n<5k
---
# KoCoT-Collection
Using DeepL dataset, translation about [kaist-CoT](https://huggingface.co/datasets/kaist-ai/CoT-Collection).
---
# Original Dataset Card for Dataset Name
## Dataset Description
- **Homepage:https://github.com/kaistAI/CoT-Collection**
- **Repository:https://github.com/kaistAI/CoT-Collection**
- **Paper:https://arxiv.org/abs/2305.14045**
- **Point of Contact:sejune@lklab.io**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
| name | train |
|-------------------|------:|
|CoT-Collection|1837928|
## Additional Information
### Citation Information
```
@article{kim2023cot,
title={The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning},
author={Kim, Seungone and Joo, Se June and Kim, Doyoung and Jang, Joel and Ye, Seonghyeon and Shin, Jamin and Seo, Minjoon},
journal={arXiv preprint arXiv:2305.14045},
year={2023}
}
``` | [
-0.5337597727775574,
-0.563605010509491,
0.40527793765068054,
-0.16050414741039276,
-0.7542588114738464,
0.4286259710788727,
-0.6147931814193726,
-0.4473588168621063,
0.15017172694206238,
0.6761850714683533,
-0.574765682220459,
-1.166168212890625,
-0.7913895845413208,
-0.12123104184865952,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yirenlu/heroicons | yirenlu | 2023-09-26T23:11:38Z | 68 | 0 | null | [
"region:us"
] | 2023-09-26T23:11:38Z | 2023-09-25T19:55:57.000Z | 2023-09-25T19:55:57 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 4277197.0
num_examples: 292
download_size: 4220955
dataset_size: 4277197.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "heroicons"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4450588524341583,
-0.09343930333852768,
0.05103215202689171,
0.10556631535291672,
-0.21200351417064667,
0.12752309441566467,
0.11639256030321121,
-0.21411660313606262,
0.8683764934539795,
0.7570868134498596,
-0.8662208914756775,
-0.7074150443077087,
-0.6029006242752075,
0.06503451615571... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
happylkx/InstructCoder | happylkx | 2023-11-09T08:59:57Z | 68 | 3 | null | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"code",
"arxiv:2310.20329",
"region:us"
] | 2023-11-09T08:59:57Z | 2023-10-09T11:21:14.000Z | 2023-10-09T11:21:14 | ---
task_categories:
- text-generation
language:
- en
tags:
- code
pretty_name: instruct_coder
size_categories:
- 100K<n<1M
---
<div align="center">
<img src="https://github.com/Happylkx/InstructCoder/raw/main/docs/logo.png">
</div>
<div align="center">
<a href="https://github.com/qishenghu/CodeInstruct/blob/main/CodeInstruct.pdf">Paper</a> |
<a href="https://github.com/qishenghu/CodeInstruct">Code</a> |
<a href="https://happylkx.github.io/InstructCoder/">Blog</a>
<!-- <a href="https://blog.nus.edu.sg/kaixinli/2023/05/23/codeinstruct/">Blog</a> -->
</div>
<!-- | [Checkpoints](link_to_checkpoints) -->
# InstructCoder (CodeInstruct): Empowering Language Models to Edit Code
## Updates
- May 23, 2023: Paper, code and data released.
## Overview
InstructCoder is the first dataset designed to adapt LLMs for general code editing. It consists of 114,239 instruction-input-output triplets and covers multiple distinct code editing scenarios, generated by ChatGPT. LLaMA-33B finetuned on InstructCoder performs on par with ChatGPT on a real-world test set derived from GitHub commits.

In the ever-evolving world of software development, efficient and effective code editing plays a pivotal role. As the demands for faster development cycles and increased productivity continue to rise, harnessing the power of Large Language Models in code editing has emerged as a game-changer. This project aims to revolutionize code editing, empowering developers with intelligent tools and techniques.
Though the community has studied coding with LLMs extensively, our experiments demonstrate that current LLMs still struggle with code editing. To this end, we curate a code editing dataset, dubbed InstructCoder, for improving and evaluating the code editing abilities of LLMs. InstructCoder is an instructional dataset containing diverse code-editing tasks, covering 20 scenarios where code editing may be performed. [The dataset is released here.](https://github.com/Happylkx/InstructCoder) Our empirical experiments reveal that open-source LLMs display notable gains in code editing abilities after finetuning on InstructCoder. Some qualitative examples are provided.
## Data Collection
InstructCoder is systematically collected through an iterative process. Initial seed tasks are selected from GitHub commits. Inspired by Self-Instruct, we prompt ChatGPT to bootstrap new instructions. Then, plausible scenarios where the filtered instructions may be used are generated. Finally, corresponding code input and output are obtained conditioned on both the instruction and scenario. High-quality samples are manually selected and recurrently added to the task pool for further generation.
## Results
The largest model fine-tuned, LLaMA-33B, performs on par with ChatGPT, with 79.3% accuracy on the test set. Some qualitative examples generated by our model are shown here.
## Release
We are planning to release the following assets:
- [x] Full dataset
- [x] Source Code
<!-- The LoRA checkpoints are available [here](link_to_checkpoints). -->
## Qualitative Examples
In this section, we showcase three qualitative examples generated by the LLaMA-33B model.
1. **Instruction:** Transform the code into a class method called ‘calculate_distance’ that calculates the distance between two points in 3D space.
**Input code:**
```python
import math
def distance(point1, point2):
x1, y1, z1 = point1
x2, y2, z2 = point2
dx = x2 - x1
dy = y2 - y1
dz = z2 - z1
return math.sqrt(dx**2 + dy**2 + dz**2)
point1 = (0, 0, 0)
point2 = (1, 1, 1)
print(distance(point1, point2))
```
Output code:
```python
import math
class Point:
def __init__(self, x, y, z):
self.x = x
self.y = y
self.z = z
def calculate_distance(self, other_point):
dx = other_point.x - self.x
dy = other_point.y - self.y
dz = other_point.z - self.z
return math.sqrt(dx**2 + dy**2 + dz**2)
point1 = Point(0, 0, 0)
point2 = Point(1, 1, 1)
print(point1.calculate_distance(point2))
```
2. Instruction: Integrate error handling into the code that performs batch processing of large datasets, including job failures or incomplete runs.
Input code:
```python
import pandas as pd
def process_orders(orders_file, shipping_file, inventory_file):
orders = pd.read_csv(orders_file)
shipping = pd.read_csv(shipping_file)
inventory = pd.read_csv(inventory_file)
# Perform data processing and analysis here...
results = {"success": True, "message": "Batch processing completed successfully."}
return results
orders_file = "orders.csv"
shipping_file = "shipping.csv"
inventory_file = "inventory.csv"
results = process_orders(orders_file, shipping_file, inventory_file)
print(results)
```
Output code:
```python
import pandas as pd
import logging
logging.basicConfig(filename='batch_processing.log', level=logging.ERROR)
def process_orders(orders_file, shipping_file, inventory_file):
try:
orders = pd.read_csv(orders_file)
shipping = pd.read_csv(shipping_file)
inventory = pd.read_csv(inventory_file)
# Perform data processing and analysis here...
results = {"success": True, "message": "Batch processing completed successfully."}
return results
except Exception as e:
logging.error(f"Error processing batch: {e}")
results = {"success": False, "message": "Batch processing failed. Please check the input files for any issues."}
return results
orders_file = "orders.csv"
shipping_file = "shipping.csv"
inventory_file = "inventory.csv"
results = process_orders(orders_file, shipping_file, inventory_file)
print(results)
```
3. Instruction: Refactor any code that uses ‘exec’ or ‘eval’ for security reasons.
Input code:
```python
Copy code
game_data = {
"score": 0,
"level": 1
}
def execute_script(script):
exec(script)
script = """
game_data["score"] += 10
if game_data["score"] >= 100:
game_data["level"] += 1
"""
execute_script(script)
print(game_data)
```
Output code:
```python
game_data = {
"score": 0,
"level": 1
}
def execute_script(script):
compiled_script = compile(script, "", "exec")
exec(compiled_script, game_data)
script = """
game_data["score"] += 10
if game_data["score"] >= 100:
game_data["level"] += 1
"""
execute_script(script)
print(game_data)
```
Now the script is not able to access the data in game_data.
## Limitations and Safety Concerns
By recognizing and addressing limitations and safety concerns, we hope developers can foster responsible AI deployment and use, promoting a more ethical, equitable, and secure future for large language models in code production.
1. **License:** You must ensure compliance with the corresponding licenses of the LLMs. It is important to thoroughly review the licenses and make sure that your usage is in accordance with their terms. For instance, you are not allowed to use LLaMA commercially, for it is released under a noncommercial license.
2. **Incomplete or Imperfect Knowledge:** LLMs are trained on vast amounts of data, which may not always be up-to-date or entirely accurate. For example, the APIs of a library may change over time. Consequently, the information provided by the models could be outdated, inaccurate, or even misleading in some instances.
3. **Overuse and Dependency:** Users might incorrectly interpret or rely too heavily on the outputs generated by large language models. It is crucial to provide proper guidance and promote an understanding of the model’s limitations, encouraging users to critically assess and verify the information or suggestions provided. Please make sure to check the generation of the models before using them.
Overreliance on large language models could lead to complacency, potentially causing users to undervalue human intelligence, such as creativity and critical thinking. We encourage users to use AI as a tool to supplement, rather than replace, human input and judgment.
4. **Malicious Use:** There is a risk that malicious actors might use the tools for nefarious purposes, such as generating malicious software. It is important to monitor the use and deployment of these models, track and report abuse, and develop countermeasures to address potential malicious activity.
5. **Bias and Discrimination:** Language models can inherit societal biases present in their training data, possibly leading to discriminatory or biased generations. Though our dataset is not likely to contain such toxic data, they may appear in the responses because of the base LLMs.
## Citation
Feel free to cite our work if you find it interesting or use the data:
```plain
@misc{2023instructcoder,
title={InstructCoder: Empowering Language Models for Code Editing},
author={Qisheng Hu and Kaixin Li and Xu Zhao and Yuxi Xie and Tiedong Liu and Hui Chen and Qizhe Xie and Junxian He},
year={2023},
eprint={2310.20329},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Conclusion
The integration of AI into code editing represents a significant milestone in the evolution of software development. By leveraging AI’s capabilities in understanding code semantics, patterns, and best practices, developers can unlock new levels of productivity, code quality, and efficiency. This project we’ve explored demonstrates the immense potential of intelligent code editing tools. As the software development landscape continues to evolve, embracing AI is poised to become a standard practice, and sets the stage for a future where developers can focus more on creativity and problem-solving, while AI handles the mundane aspects of coding.
| [
-0.16941829025745392,
-0.7464854121208191,
0.36179670691490173,
0.2464740127325058,
0.020583346486091614,
0.05390092357993126,
-0.13409189879894257,
-0.5057833790779114,
-0.06028149649500847,
0.43073129653930664,
-0.4440138041973114,
-0.6917433738708496,
-0.43682989478111267,
0.03105235844... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arpitsh018/apt-micro-dataset-llm-v2-714k | arpitsh018 | 2023-10-09T16:18:37Z | 68 | 0 | null | [
"region:us"
] | 2023-10-09T16:18:37Z | 2023-10-09T16:17:11.000Z | 2023-10-09T16:17:11 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: int64
- name: source
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 1753434111.3731575
num_examples: 714801
- name: validation
num_bytes: 490607.6268424799
num_examples: 200
download_size: 911152910
dataset_size: 1753924719.0
---
# Dataset Card for "apt-micro-dataset-llm-v2-714k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.41698113083839417,
-0.07643938064575195,
0.4156578481197357,
0.14674870669841766,
-0.6848574876785278,
0.005864476319402456,
0.5288381576538086,
0.019816042855381966,
0.9459872245788574,
0.6120965480804443,
-0.8073673844337463,
-0.7655086517333984,
-0.5323593020439148,
0.002973585855215... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MemGPT/example_short_stories | MemGPT | 2023-10-19T02:04:57Z | 68 | 1 | null | [
"region:us"
] | 2023-10-19T02:04:57Z | 2023-10-19T02:04:37.000Z | 2023-10-19T02:04:37 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vitaliy-sharandin/depression-instruct | vitaliy-sharandin | 2023-10-25T13:24:11Z | 68 | 0 | null | [
"region:us"
] | 2023-10-25T13:24:11Z | 2023-10-25T13:22:41.000Z | 2023-10-25T13:22:41 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 12872
num_examples: 51
download_size: 10500
dataset_size: 12872
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "depression-instruct"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6696636080741882,
-0.33882492780685425,
0.4513450264930725,
0.47809237241744995,
-0.12777814269065857,
-0.13659121096134186,
0.12301991134881973,
0.010028974153101444,
0.8820855617523193,
0.3133489191532135,
-0.9608829617500305,
-0.8671453595161438,
-0.7309702634811401,
-0.1161357387900... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kingpingg/carbon_emission_reduction_300_v2 | kingpingg | 2023-10-28T14:01:49Z | 68 | 0 | null | [
"region:us"
] | 2023-10-28T14:01:49Z | 2023-10-28T13:59:58.000Z | 2023-10-28T13:59:58 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
josedonoso/apples-dataset-v1 | josedonoso | 2023-10-28T23:35:52Z | 68 | 0 | null | [
"region:us"
] | 2023-10-28T23:35:52Z | 2023-10-28T23:35:50.000Z | 2023-10-28T23:35:50 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2704421.0
num_examples: 192
- name: test
num_bytes: 646648.0
num_examples: 48
download_size: 3236890
dataset_size: 3351069.0
---
# Dataset Card for "apples-dataset-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5824750065803528,
-0.2972734868526459,
0.26269668340682983,
0.23723183572292328,
-0.15515699982643127,
-0.041223056614398956,
0.6344226002693176,
-0.21881568431854248,
1.0303676128387451,
0.5540785193443298,
-1.183696985244751,
-0.6827791929244995,
-0.7063190340995789,
-0.45068609714508... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Jackmin108/cult-de-small | Jackmin108 | 2023-10-30T15:49:39Z | 68 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-10-30T15:49:39Z | 2023-10-30T15:46:46.000Z | 2023-10-30T15:46:46 | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path:
- data/train-0000.parquet
- data/train-0001.parquet
- data/train-0002.parquet
- data/train-0003.parquet
- data/train-0004.parquet
- data/train-0005.parquet
- data/train-0006.parquet
- data/train-0007.parquet
- split: validation
path:
- data/validation-0000.parquet
- data/validation-0001.parquet
- data/validation-0002.parquet
- data/validation-0003.parquet
- data/validation-0004.parquet
- data/validation-0005.parquet
- data/validation-0006.parquet
- data/validation-0007.parquet
---
Hello
| [
-0.4658491909503937,
-0.7308298945426941,
0.6140241622924805,
-0.23653292655944824,
-0.17992763221263885,
0.18987266719341278,
0.7849526405334473,
-0.6358697414398193,
1.0210813283920288,
0.8448684215545654,
-0.5286434292793274,
-0.26382145285606384,
-0.7855719923973083,
0.2464668303728103... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sunghuncsa/origin_ds | sunghuncsa | 2023-11-03T04:51:18Z | 68 | 0 | null | [
"region:us"
] | 2023-11-03T04:51:18Z | 2023-11-03T04:49:50.000Z | 2023-11-03T04:49:50 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sabilmakbar/indo_wiki | sabilmakbar | 2023-11-03T07:59:24Z | 68 | 0 | null | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:Wikipedia-HF",
"language:ace",
"language:ban",... | 2023-11-03T07:59:24Z | 2023-11-03T06:49:33.000Z | 2023-11-03T06:49:33 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- ace
- ban
- bjn
- bug
- gor
- id
- jv
- mis
- min
- ms
- nia
- su
- tet
license:
- cc-by-sa-3.0
- gfdl
multilinguality:
- multilingual
source_datasets:
- Wikipedia-HF
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
pretty_name: Wikipedia Archive for Indonesian Languages & Local Languages
tags:
- Wikipedia
- Indonesian
- Sundanese
- Javanese
- Malay
- Dialect
- Javanese Dialect (Banyumase/Ngapak)
- Indonesian Language
- Malay Language
- Indonesia-related Languages
- Indonesian Local Languages
dataset_info:
- config_name: indowiki_all
features:
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: ace
num_bytes: 4875688
num_examples: 12932
- name: ban
num_bytes: 17561379
num_examples: 20243
- name: bjn
num_bytes: 6669628
num_examples: 10460
- name: bug
num_bytes: 3297641
num_examples: 15877
- name: gor
num_bytes: 6007726
num_examples: 14572
- name: id
num_bytes: 1103106307
num_examples: 657990
- name: jv
num_bytes: 70335030
num_examples: 73150
- name: map_bms
num_bytes: 5215803
num_examples: 13574
- name: min
num_bytes: 116481049
num_examples: 227024
- name: ms
num_bytes: 416001194
num_examples: 367463
- name: nia
num_bytes: 1938378
num_examples: 1651
- name: su
num_bytes: 47489084
num_examples: 61557
- name: tet
num_bytes: 1452716
num_examples: 1465
download_size: 1803193334
dataset_size: 1800431623
- config_name: indowiki_dedup_all
features:
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: ace
num_bytes: 4867838
num_examples: 12904
- name: ban
num_bytes: 17366080
num_examples: 19837
- name: bjn
num_bytes: 6655378
num_examples: 10437
- name: bug
num_bytes: 2072609
num_examples: 9793
- name: gor
num_bytes: 5989252
num_examples: 14514
- name: id
num_bytes: 1100932403
num_examples: 654287
- name: jv
num_bytes: 69774853
num_examples: 72667
- name: map_bms
num_bytes: 5060989
num_examples: 11832
- name: min
num_bytes: 116376870
num_examples: 225858
- name: ms
num_bytes: 410443550
num_examples: 346186
- name: nia
num_bytes: 1938121
num_examples: 1650
- name: su
num_bytes: 47410439
num_examples: 61494
- name: tet
num_bytes: 1447926
num_examples: 1460
download_size: 1793103024
dataset_size: 1790336308
- config_name: indowiki_dedup_id_only
features:
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1100932403
num_examples: 654287
download_size: 1103131493
dataset_size: 1100932403
---
# **Indonesian Wikipedia Data Repository**
---
license: cc-by-sa-3.0
---
Welcome to Indonesian Wikipedia Data Repository. The datasets are extracted from [Wikipedia HF](https://huggingface.co/datasets/wikipedia) and processed using the scripts available in this repository for reproducibility purpose.
# **FAQS**
### What are the available languages provided in dataset?
Please check the following table.
| Lang Code | Lang Desc | Wiki Info | Total Data | Total Size (bytes) |
| :---: | :----: | :--- | ---: | ---: |
| ace | Acehnese | [Wiki Link](https://en.wikipedia.org/wiki/Acehnese_language) | 12904 | 4867838 |
| ban | Balinese | [Wiki Link](https://en.wikipedia.org/wiki/Balinese_language) | 19837 | 17366080 |
| bjn | Acehnese | [Wiki Link](https://en.wikipedia.org/wiki/Banjarese_language) | 10437 | 6655378 |
| bug | Buginese | [Wiki Link](https://en.wikipedia.org/wiki/Buginese_language) | 9793 | 2072609 |
| gor | Gorontalo | [Wiki Link](https://en.wikipedia.org/wiki/Gorontalo_language) | 14514 | 5989252 |
| id | Indonesian | [Wiki Link](https://en.wikipedia.org/wiki/Indonesian_language) | 654287 | 1100932403 |
| jv | Javanese | [Wiki Link](https://en.wikipedia.org/wiki/Javanese_language) | 72667 | 69774853 |
| map_bms | Banyumasan <br />(Dialect of Javanese) | [Wiki Link](https://en.wikipedia.org/wiki/Banyumasan_dialect) | 11832 | 5060989 |
| min | Minangkabau | [Wiki Link](https://en.wikipedia.org/wiki/Minangkabau_language) | 225858 | 116376870 |
| ms | Malay | [Wiki Link](https://en.wikipedia.org/wiki/Malay_language) | 346186 | 410443550 |
| nia | Nias | [Wiki Link](https://en.wikipedia.org/wiki/Nias_language) | 1650 | 1938121 |
| su | Sundanese | [Wiki Link](https://en.wikipedia.org/wiki/Sundanese_language) | 61494 | 47410439 |
| tet | Tetum | [Wiki Link](https://en.wikipedia.org/wiki/Tetum_language) | 1465 | 1452716 |
### How do I extract new Wikipedia Dataset of Indonesian languages?
You may check to the script [_```extract_raw_wiki_data.py```_](https://huggingface.co/datasets/sabilmakbar/indo_wiki/blob/main/extract_raw_wiki_data.py) to understand its implementations, or you can adjust the bash provided in [_```extract_raw_wiki_data_indo.sh```_](https://huggingface.co/datasets/sabilmakbar/indo_wiki/blob/main/extract_raw_wiki_data_indo.sh) to extract it on your own. Please note that this dataset is extensible to any languages of your choice.
### How do I extract new Wikipedia Dataset of Indonesian languages?
You may visit this [Wikipedia Dump Index](https://dumps.wikimedia.org/backup-index.html) to check any latest available data and this link [Wikipedia Language Coverage](https://meta.wikimedia.org/wiki/List_of_Wikipedias#All_Wikipedias_ordered_by_number_of_articles) to map into any languages that you're wanting to extract.
### How does the data being preprocessed? What makes it different from loading it directly from Wikipedia HF?
The data available in here are processed with following flows:
1. Raw data is being deduplicated on ```title``` and ```text``` (text-content from a given article), to remove articles containing boilerplate text (template text that are used usually for no-available informations or asking for contributions of content in that article), which usually deemed noisy for NLP data.
2. Furthermore, the ```title``` and ```text``` data are being checked for string-matching duplication (duplication of text that are being pre-processed, i.e symbols removed, HTML tags striped, or ASCII-chars validated). You may check this [ ```cleanse_wiki_data.py```](https://huggingface.co/datasets/sabilmakbar/indo_wiki/blob/main/cleanse_wiki_data.py) script to understand its implementation.
# Getting Started #
### To read the datasets directly ###
Use one of the following code chunks to load it from HuggingFace Hub:
You can refer to the 2nd args of ```config name``` using the following script
```
dataset = load_dataset(
"sabilmakbar/indo_wiki",
"indo_wiki_dedup_data" # a config name, can be "indo_wiki_raw_data" or "indowiki_dedup_id_only", defaults to "indo_wiki_dedup_data"
)
```
Or you can provide both ```lang``` and ```date_stamp``` (providing only one will thrown an error)
```
dataset = load_dataset(
"sabilmakbar/indo_wiki",
lang = "id", # see the splits for complete lang choices
date_stamp="20230901"
)
```
### To replicate the whole dataset generation process ###
1. Set-up a new Python/Conda Environment (recommended Python version: 3.9.6 to 3.9.18 or 3.10.0 to 3.10.13) and install the requirements on ```requirements.txt``` use this codebase via ```pip install -r requirements.txt```.
2. Activate the chosen Python/Conda environment which the requirements are being installed.
3. Run this ```sh``` script for extractions from Wikimedia Dump:
```sh extract_raw_wiki_data_indo.sh```.
4. Run this ```sh``` script of deduplication:
```sh dedup_raw_wiki_data_indo.sh```.
## Citation Info:
```
@ONLINE{wikidump,
author = "Wikimedia Foundation",
title = "Wikimedia Downloads",
url = "https://dumps.wikimedia.org"}
@ONLINE{wikipedia-hf,
title = "Huggingface Wikipedia Dataset",
url = "https://huggingface.co/datasets/wikipedia"}
``` | [
-0.7007220983505249,
-0.6957631707191467,
-0.11699309945106506,
0.30887413024902344,
-0.20235274732112885,
-0.294261634349823,
-0.5561392903327942,
-0.38612911105155945,
0.4975283145904541,
0.6740453839302063,
-0.4333595931529999,
-0.5324169993400574,
-0.4742935299873352,
0.741668462753295... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vic0428/imdb-card-pred-science | vic0428 | 2023-11-18T06:20:28Z | 68 | 0 | null | [
"region:us"
] | 2023-11-18T06:20:28Z | 2023-11-10T01:11:52.000Z | 2023-11-10T01:11:52 | ---
dataset_info:
features:
- name: text
dtype: string
- name: prompt
dtype: string
- name: true_cardinality
dtype: int64
splits:
- name: train
num_bytes: 39344995.2
num_examples: 80000
- name: test
num_bytes: 9836248.8
num_examples: 20000
download_size: 8632280
dataset_size: 49181244.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "imdb-card-pred-science"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7645304799079895,
-0.1165023073554039,
0.25607502460479736,
0.06165569648146629,
-0.48684531450271606,
0.1785019338130951,
0.3885539472103119,
-0.021971996873617172,
1.1302517652511597,
0.4287015199661255,
-1.007530927658081,
-0.7029268145561218,
-0.7849839329719543,
-0.2133570313453674... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
atmallen/qm_bob_mixture_1.0e | atmallen | 2023-11-16T18:18:21Z | 68 | 0 | null | [
"region:us"
] | 2023-11-16T18:18:21Z | 2023-11-16T03:33:47.000Z | 2023-11-16T03:33:47 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: alice_label
dtype: bool
- name: bob_label
dtype: bool
- name: difficulty
dtype: int64
- name: statement
dtype: string
- name: choices
sequence: string
- name: character
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: train
num_bytes: 22366655.5
num_examples: 200000
- name: validation
num_bytes: 2254431.5
num_examples: 20000
- name: test
num_bytes: 2248382.5
num_examples: 20000
download_size: 0
dataset_size: 26869469.5
---
# Dataset Card for "qm_bob__mixture_1.0e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7224079370498657,
-0.27390751242637634,
0.14938576519489288,
0.48806077241897583,
-0.4321269690990448,
0.28552740812301636,
0.4602131247520447,
0.04283023998141289,
1.04798424243927,
0.6431397199630737,
-0.7788709402084351,
-0.8735624551773071,
-0.524617075920105,
-0.4130714535713196,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PaulTran/banner_generate | PaulTran | 2023-11-24T04:39:17Z | 68 | 0 | null | [
"region:us"
] | 2023-11-24T04:39:17Z | 2023-11-23T15:33:45.000Z | 2023-11-23T15:33:45 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 84118313.344
num_examples: 1362
download_size: 84092692
dataset_size: 84118313.344
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "banner_generate"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5308582782745361,
-0.2575814127922058,
-0.0237947478890419,
0.2568924129009247,
-0.28450027108192444,
-0.02028825134038925,
0.23336105048656464,
-0.15576083958148956,
0.7883504629135132,
0.38014906644821167,
-0.9332665205001831,
-0.7300461530685425,
-0.5014272928237915,
-0.2291403710842... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
voidful/NMSQA | voidful | 2023-04-04T04:46:23Z | 67 | 7 | null | [
"task_categories:question-answering",
"task_categories:automatic-speech-recognition",
"task_ids:abstractive-qa",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"language_creators:machine-generated",
"language_creators:crowdsourced",... | 2023-04-04T04:46:23Z | 2022-03-16T16:03:42.000Z | 2022-03-16T16:03:42 | ---
annotations_creators:
- crowdsourced
- machine-generated
language_creators:
- expert-generated
- machine-generated
- crowdsourced
language:
- en
license: []
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- question-answering
- automatic-speech-recognition
task_ids:
- abstractive-qa
pretty_name: NMSQA
tags:
- speech-recognition
---
# Dataset Card for NMSQA(Natural Multi-speaker Spoken Question Answering)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- Homepage:
https://github.com/DanielLin94144/DUAL-textless-SQA
- Repository:
https://github.com/DanielLin94144/DUAL-textless-SQA
- Paper:
https://arxiv.org/abs/2203.04911
- Leaderboard:
- Point of Contact:
Download audio data: [https://huggingface.co/datasets/voidful/NMSQA/resolve/main/nmsqa_audio.tar.gz](https://huggingface.co/datasets/voidful/NMSQA/resolve/main/nmsqa_audio.tar.gz)
Unzip audio data: `tar -xf nmsqa_audio.tar.gz`
### Dataset Summary
The Natural Multi-speaker Spoken Question Answering (NMSQA) dataset is designed for the task of textless spoken question answering. It is based on the SQuAD dataset and contains spoken questions and passages. The dataset includes the original text, transcriptions, and audio files of the spoken content. This dataset is created to evaluate the performance of models on textless spoken question answering tasks.
### Supported Tasks and Leaderboards
The primary task supported by this dataset is textless spoken question answering, where the goal is to answer questions based on spoken passages without relying on textual information. The dataset can also be used for automatic speech recognition tasks.
### Languages
The dataset is in English.
## Dataset Structure
### Data Instances
Each instance in the dataset contains the following fields:
- id: Unique identifier for the instance
- title: The title of the passage
- context: The passage text
- question: The question text
- - answer_start: The start index of the answer in the text
- audio_full_answer_end: The end position of the audio answer in seconds
- audio_full_answer_start: The start position of the audio answer in seconds
- audio_full_neg_answer_end: The end position of the audio answer in seconds for an incorrect answer with the same words
- audio_full_neg_answer_start: The start position of the audio answer in seconds for an incorrect answer with the same words
- audio_segment_answer_end: The end position of the audio answer in seconds for the segment
- audio_segment_answer_start: The start position of the audio answer in seconds for the segment
- text: The answer text
- content_segment_audio_path: The audio path for the content segment
- content_full_audio_path: The complete audio path for the content
- content_audio_sampling_rate: The audio sampling rate
- content_audio_speaker: The audio speaker
- content_segment_text: The segment text of the content
- content_segment_normalized_text: The normalized text for generating audio
- question_audio_path: The audio path for the question
- question_audio_sampling_rate: The audio sampling rate
- question_audio_speaker: The audio speaker
- question_normalized_text: The normalized text for generating audio
### Data Fields
The dataset includes the following data fields:
- id
- title
- context
- question
- answers
- content_segment_audio_path
- content_full_audio_path
- content_audio_sampling_rate
- content_audio_speaker
- content_segment_text
- content_segment_normalized_text
- question_audio_path
- question_audio_sampling_rate
- question_audio_speaker
- question_normalized_text
### Data Splits
The dataset is split into train, dev, and test sets.
## Dataset Creation
### Curation Rationale
The NMSQA dataset is created to address the challenge of textless spoken question answering, where the model must answer questions based on spoken passages without relying on textual information.
### Source Data
The NMSQA dataset is based on the SQuAD dataset, with spoken questions and passages created from the original text data.
#### Initial Data Collection and Normalization
The initial data collection involved converting the original SQuAD dataset's text-based questions and passages into spoken audio files. The text was first normalized, and then audio files were generated using text-to-speech methods.
#### Who are the source language producers?
The source language producers are the creators of the SQuAD dataset and the researchers who generated the spoken audio files for the NMSQA dataset.
### Annotations
#### Annotation process
The annotations for the NMSQA dataset are derived from the original SQuAD dataset. Additional annotations, such as audio start and end positions for correct and incorrect answers, as well as audio file paths and speaker information, are added by the dataset creators.
#### Who are the annotators?
The annotators for the NMSQA dataset are the creators of the SQuAD dataset and the researchers who generated the spoken audio files and additional annotations for the NMSQA dataset.
### Personal and Sensitive Information
The dataset does not contain any personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
The NMSQA dataset contributes to the development and evaluation of models for textless spoken question answering tasks, which can lead to advancements in natural language processing and automatic speech recognition. Applications of these technologies can improve accessibility and convenience in various domains, such as virtual assistants, customer service, and voice-controlled devices.
### Discussion of Biases
The dataset inherits potential biases from the original SQuAD dataset, which may include biases in the selection of passages, questions, and answers. Additionally, biases may be introduced in the text-to-speech process and the choice of speakers used to generate the spoken audio files.
### Other Known Limitations
As the dataset is based on the SQuAD dataset, it shares the same limitations, including the fact that it is limited to the English language and mainly focuses on factual questions. Furthermore, the dataset may not cover a wide range of accents, dialects, or speaking styles.
## Additional Information
### Dataset Curators
The NMSQA dataset is curated by Guan-Ting Lin, Yung-Sung Chuang, Ho-Lam Chung, Shu-Wen Yang, Hsuan-Jui Chen, Shang-Wen Li, Abdelrahman Mohamed, Hung-Yi Lee, and Lin-Shan Lee.
### Licensing Information
The licensing information for the dataset is not explicitly mentioned.
### Citation Information
```bibtex
@article{lin2022dual,
title={DUAL: Textless Spoken Question Answering with Speech Discrete Unit Adaptive Learning},
author={Lin, Guan-Ting and Chuang, Yung-Sung and Chung, Ho-Lam and Yang, Shu-wen and Chen, Hsuan-Jui and Li, Shang-Wen and Mohamed, Abdelrahman and Lee, Hung-yi and Lee, Lin-shan},
journal={arXiv preprint arXiv:2203.04911},
year={2022}
}
```
### Contributions
Thanks to [@voidful](https://github.com/voidful) for adding this dataset. | [
-0.42127254605293274,
-0.6821575164794922,
0.24629798531532288,
0.16310054063796997,
-0.0562734454870224,
0.15978947281837463,
-0.09058587998151779,
-0.21537499129772186,
0.3341072201728821,
0.5298824906349182,
-1.147092580795288,
-0.6316111087799072,
-0.17050638794898987,
0.33104997873306... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
huggingnft/cryptopunks | huggingnft | 2022-04-16T17:59:07Z | 67 | 4 | null | [
"license:mit",
"huggingnft",
"nft",
"huggan",
"gan",
"image",
"images",
"region:us"
] | 2022-04-16T17:59:07Z | 2022-04-10T08:52:12.000Z | 2022-04-10T08:52:12 | ---
tags:
- huggingnft
- nft
- huggan
- gan
- image
- images
task:
- unconditional-image-generation
datasets:
- huggingnft/cryptopunks
license: mit
---
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/cryptopunks).
Model is available [here](https://huggingface.co/huggingnft/cryptopunks).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
You can load this dataset directly with the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/cryptopunks")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
    author = {Aleksey Korshuk},
    year = {2022}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
| [
-0.6697097420692444,
-0.6542297005653381,
0.1431151032447815,
0.28186750411987305,
-0.4222785532474518,
0.14051136374473572,
-0.1921359896659851,
-0.5952894687652588,
0.8353676199913025,
0.4185691773891449,
-0.8558594584465027,
-0.9258648157119751,
-0.6551395654678345,
0.07425014674663544,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cfilt/HiNER-collapsed | cfilt | 2023-03-07T16:32:27Z | 67 | 0 | hiner-collapsed-1 | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:hi",
"license:cc-by-sa-4.0",
"arxiv:2204.137... | 2023-03-07T16:32:27Z | 2022-04-22T10:51:15.000Z | 2022-04-22T10:51:15 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- hi
license: "cc-by-sa-4.0"
multilinguality:
- monolingual
paperswithcode_id: hiner-collapsed-1
pretty_name: HiNER - Large Hindi Named Entity Recognition dataset
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
<p align="center"><img src="https://huggingface.co/datasets/cfilt/HiNER-collapsed/raw/main/cfilt-dark-vec.png" alt="Computation for Indian Language Technology Logo" width="150" height="150"/></p>
# Dataset Card for HiNER-original
[](https://twitter.com/cfiltnlp)
[](https://twitter.com/PeopleCentredAI)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/cfiltnlp/HiNER
- **Repository:** https://github.com/cfiltnlp/HiNER
- **Paper:** https://arxiv.org/abs/2204.13743
- **Leaderboard:** https://paperswithcode.com/sota/named-entity-recognition-on-hiner-collapsed
- **Point of Contact:** Rudra Murthy V
### Dataset Summary
This dataset was created for the fundamental NLP task of Named Entity Recognition for the Hindi language at CFILT Lab, IIT Bombay. We gathered the dataset from various government information webpages and manually annotated these sentences as a part of our data collection strategy.
**Note:** The dataset contains sentences from ILCI and other sources. ILCI dataset requires license from Indian Language Consortium due to which we do not distribute the ILCI portion of the data. Please send us a mail with proof of ILCI data acquisition to obtain the full dataset.
### Supported Tasks and Leaderboards
Named Entity Recognition
### Languages
Hindi
## Dataset Structure
### Data Instances
{'id': '0', 'tokens': ['प्राचीन', 'समय', 'में', 'उड़ीसा', 'को', 'कलिंग', 'के', 'नाम', 'से', 'जाना', 'जाता', 'था', '।'], 'ner_tags': [0, 0, 0, 3, 0, 3, 0, 0, 0, 0, 0, 0, 0]}
### Data Fields
- `id`: The ID value of the data point.
- `tokens`: Raw tokens in the dataset.
- `ner_tags`: the NER tags for this dataset.
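For illustration, pairing `tokens` with `ner_tags` from the instance above recovers the tagged entities. A minimal sketch — it takes tag id 0 as the outside/`O` tag, which matches the instance shown; the full id-to-label mapping should be read from `dataset.features` rather than hard-coded:

```python
def tagged_tokens(tokens, ner_tags, outside_id=0):
    """Return (token, tag_id) pairs for tokens carrying a non-outside tag."""
    return [(tok, tag) for tok, tag in zip(tokens, ner_tags) if tag != outside_id]

# The instance shown under "Data Instances"
tokens = ['प्राचीन', 'समय', 'में', 'उड़ीसा', 'को', 'कलिंग', 'के', 'नाम', 'से', 'जाना', 'जाता', 'था', '।']
ner_tags = [0, 0, 0, 3, 0, 3, 0, 0, 0, 0, 0, 0, 0]

entities = tagged_tokens(tokens, ner_tags)  # the two place names carry tag 3
```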
### Data Splits
| | Train | Valid | Test |
| ----- | ------ | ----- | ---- |
| original | 76025 | 10861 | 21722|
| collapsed | 76025 | 10861 | 21722|
## About
This repository contains the Hindi Named Entity Recognition dataset (HiNER) published at the Language Resources and Evaluation Conference (LREC) in 2022. A pre-print via arXiv is available [here](https://arxiv.org/abs/2204.13743).
### Recent Updates
* Version 0.0.5: HiNER initial release
## Usage
You should have the `datasets` package installed to be able to load this dataset from the :rocket: HuggingFace datasets repository. Please install it via pip:
```shell
pip install datasets
```
To use the original dataset with all the tags, please use:<br/>
```python
from datasets import load_dataset
hiner = load_dataset('cfilt/HiNER-original')
```
To use the collapsed dataset with only PER, LOC, and ORG tags, please use:<br/>
```python
from datasets import load_dataset
hiner = load_dataset('cfilt/HiNER-collapsed')
```
However, the CoNLL format dataset files can also be found on this Git repository under the [data](data/) folder.
## Model(s)
Our best performing models are hosted on the HuggingFace models repository:
1. [HiNER-Collapsed-XLM-R](https://huggingface.co/cfilt/HiNER-Collapse-XLM-Roberta-Large)
2. [HiNER-Original-XLM-R](https://huggingface.co/cfilt/HiNER-Original-XLM-Roberta-Large)
## Dataset Creation
### Curation Rationale
HiNER was built on data extracted from various government websites handled by the Government of India which provide information in Hindi. This dataset was built for the task of Named Entity Recognition. The dataset was introduced to introduce new resources to the Hindi language that was under-served for Natural Language Processing.
### Source Data
#### Initial Data Collection and Normalization
HiNER was built on data extracted from various government websites handled by the Government of India which provide information in Hindi
#### Who are the source language producers?
Various Government of India webpages
### Annotations
#### Annotation process
This dataset was manually annotated by a single annotator over a long span of time.
#### Who are the annotators?
Pallab Bhattacharjee
### Personal and Sensitive Information
We ensured that there was no sensitive information present in the dataset. All the data points are curated from publicly available information.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to provide a large Hindi Named Entity Recognition dataset. Since the information (data points) has been obtained from public resources, we do not think there is a negative social impact in releasing this data.
### Discussion of Biases
Any biases contained in the data released by the Indian government are bound to be present in our data.
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
Pallab Bhattacharjee
### Licensing Information
CC-BY-SA 4.0
### Citation Information
```latex
@misc{https://doi.org/10.48550/arxiv.2204.13743,
doi = {10.48550/ARXIV.2204.13743},
url = {https://arxiv.org/abs/2204.13743},
author = {Murthy, Rudra and Bhattacharjee, Pallab and Sharnagat, Rahul and Khatri, Jyotsana and Kanojia, Diptesh and Bhattacharyya, Pushpak},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {HiNER: A Large Hindi Named Entity Recognition Dataset},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
``` | [
-0.5760965347290039,
-0.5711492896080017,
0.04307827726006508,
0.22340215742588043,
-0.154906764626503,
0.10661611706018448,
-0.39832109212875366,
-0.6179101467132568,
0.45931586623191833,
0.2971647381782532,
-0.3196909427642822,
-0.45778757333755493,
-0.813154935836792,
0.4806428849697113... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BDas/Turkish-Dataset | BDas | 2022-09-16T07:34:57Z | 67 | 4 | null | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:tr",
... | 2022-09-16T07:34:57Z | 2022-07-04T19:47:10.000Z | 2022-07-04T19:47:10 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- tr
license:
- other
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
- multi-label-classification
pretty_name: 'Turkish NLP Dataset'
---
# Dataset Card for "Turkish-NLP-Dataset"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/BihterDass/turkish-nlp-dataset]
- **Repository:**[https://github.com/BihterDass/turkish-nlp-dataset]
- **Size of downloaded dataset files:** 125.5 MB
- **Size of the generated dataset:** 125.5 MB
### Dataset Summary
The dataset was compiled from user comments on e-commerce sites. It consists of 160,600 training, 53,000 validation, and 53,000 test examples. The data were classified into 3 classes: positive (pos), negative (neg), and natural (nor). The data is available on GitHub.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
#### turkish-dataset-v1
- **Size of downloaded dataset files:** 125.5 MB
- **Size of the generated dataset:** 125.5 MB
### Data Fields
The data fields are the same among all splits.
#### turkish-dataset-v-v1
- `text`: a `string` feature.
- `label`: a classification label, with possible values including `positive` (2), `natural` (1), `negative` (0).
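The integer labels described above map back to their string names with a plain dictionary; a minimal sketch, with the values taken directly from the field description:

```python
# Label ids as given in the field description above
id2label = {0: "negative", 1: "natural", 2: "positive"}
label2id = {name: idx for idx, name in id2label.items()}

def decode_label(idx):
    """Map a class id back to its string name."""
    return id2label[idx]
```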
### Data Splits
| |train |validation|test |
|----|--------:|---------:|---------:|
|Data| 160600 | 53000| 53000|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@PnrSvc](https://github.com/PnrSvc) for adding this dataset. | [
-0.6205927133560181,
-0.6030187010765076,
-0.18347890675067902,
0.3218737542629242,
-0.41685736179351807,
-0.1380680501461029,
-0.4983786940574646,
-0.4146246612071991,
0.36577755212783813,
0.5226186513900757,
-0.6226016283035278,
-0.9534525275230408,
-0.7911719679832458,
0.360583424568176... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/n2c2_2018_track1 | bigbio | 2022-12-22T15:45:59Z | 67 | 1 | null | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-12-22T15:45:59Z | 2022-11-13T22:10:45.000Z | 2022-11-13T22:10:45 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: DUA
pretty_name: n2c2 2018 Selection Criteria
homepage: https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/
bigbio_pubmed: False
bigbio_public: False
bigbio_tasks:
- TEXT_CLASSIFICATION
---
# Dataset Card for n2c2 2018 Selection Criteria
## Dataset Description
- **Homepage:** https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/
- **Pubmed:** False
- **Public:** False
- **Tasks:** TXTCLASS
Track 1 of the 2018 National NLP Clinical Challenges shared tasks focused
on identifying which patients in a corpus of longitudinal medical records
meet and do not meet identified selection criteria.
This shared task aimed to determine whether NLP systems could be trained to identify if patients met or did not meet
a set of selection criteria taken from real clinical trials. The selected criteria required measurement detection (
“Any HbA1c value between 6.5 and 9.5%”), inference (“Use of aspirin to prevent myocardial infarction”),
temporal reasoning (“Diagnosis of ketoacidosis in the past year”), and expert judgment to assess (“Major
diabetes-related complication”). For the corpus, we used the dataset of American English, longitudinal clinical
narratives from the 2014 i2b2/UTHealth shared task 4.
The final selected 13 selection criteria are as follows:
1. DRUG-ABUSE: Drug abuse, current or past
2. ALCOHOL-ABUSE: Current alcohol use over weekly recommended limits
3. ENGLISH: Patient must speak English
4. MAKES-DECISIONS: Patient must make their own medical decisions
5. ABDOMINAL: History of intra-abdominal surgery, small or large intestine
resection, or small bowel obstruction.
6. MAJOR-DIABETES: Major diabetes-related complication. For the purposes of
this annotation, we define “major complication” (as opposed to “minor complication”)
as any of the following that are a result of (or strongly correlated with) uncontrolled diabetes:
a. Amputation
b. Kidney damage
c. Skin conditions
d. Retinopathy
e. nephropathy
f. neuropathy
7. ADVANCED-CAD: Advanced cardiovascular disease (CAD).
For the purposes of this annotation, we define “advanced” as having 2 or more of the following:
a. Taking 2 or more medications to treat CAD
b. History of myocardial infarction (MI)
c. Currently experiencing angina
d. Ischemia, past or present
8. MI-6MOS: MI in the past 6 months
9. KETO-1YR: Diagnosis of ketoacidosis in the past year
10. DIETSUPP-2MOS: Taken a dietary supplement (excluding vitamin D) in the past 2 months
11. ASP-FOR-MI: Use of aspirin to prevent MI
12. HBA1C: Any hemoglobin A1c (HbA1c) value between 6.5% and 9.5%
13. CREATININE: Serum creatinine > upper limit of normal
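The 13 criteria above can be treated as independent binary, document-level labels. A hypothetical encoding sketch (an illustration only, not the official n2c2 annotation format) that turns per-criterion MET / NOT MET decisions into a fixed-order 0/1 vector:

```python
# The 13 selection criteria listed above, in card order
CRITERIA = [
    "DRUG-ABUSE", "ALCOHOL-ABUSE", "ENGLISH", "MAKES-DECISIONS",
    "ABDOMINAL", "MAJOR-DIABETES", "ADVANCED-CAD", "MI-6MOS",
    "KETO-1YR", "DIETSUPP-2MOS", "ASP-FOR-MI", "HBA1C", "CREATININE",
]

def encode_decisions(decisions):
    """Map {criterion: 'met' | 'not met'} to a 13-dimensional 0/1 vector."""
    return [1 if decisions.get(c) == "met" else 0 for c in CRITERIA]

patient = {"ENGLISH": "met", "HBA1C": "met", "DRUG-ABUSE": "not met"}
vector = encode_decisions(patient)
```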
The training set consists of 202 patient records with document-level annotations and 10 records
with textual spans indicating the annotators’ evidence for their annotations, while the test set contains 86 records.
Note:
* The average inter-annotator agreement is 84.9%.
* The whereabouts of the 10 records with textual spans indicating the annotators’ evidence are unknown.
However, the author ran a simple script-based validation to check whether any of the tags contain any text
in the training set, and they do not, which confirms that at least the train and test splits do not
have any evidence tagged alongside the corresponding tags.
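As an illustration of the measurement-detection aspect (the HBA1C criterion asks for any value between 6.5% and 9.5%), here is a deliberately naive regex sketch — a real system needs far more robust extraction plus the temporal reasoning and expert judgment described above:

```python
import re

# Match "HbA1c" followed (within a short window) by a percentage value
HBA1C_PATTERN = re.compile(r"hba1c[^0-9]{0,20}(\d+(?:\.\d+)?)\s*%", re.IGNORECASE)

def hba1c_in_range(text, lo=6.5, hi=9.5):
    """True if any HbA1c percentage mentioned in text falls within [lo, hi]."""
    return any(lo <= float(v) <= hi for v in HBA1C_PATTERN.findall(text))

note = "Labs today: HbA1c 7.2%, serum creatinine within normal limits."
```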
## Citation Information
```
@article{DBLP:journals/jamia/StubbsFSHU19,
author = {
Amber Stubbs and
Michele Filannino and
Ergin Soysal and
Samuel Henry and
Ozlem Uzuner
},
title = {Cohort selection for clinical trials: n2c2 2018 shared task track 1},
journal = {J. Am. Medical Informatics Assoc.},
volume = {26},
number = {11},
pages = {1163--1171},
year = {2019},
url = {https://doi.org/10.1093/jamia/ocz163},
doi = {10.1093/jamia/ocz163},
timestamp = {Mon, 15 Jun 2020 16:56:11 +0200},
biburl = {https://dblp.org/rec/journals/jamia/StubbsFSHU19.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| [
-0.335426390171051,
-0.619536817073822,
0.41565340757369995,
0.12567083537578583,
-0.20237040519714355,
0.15473710000514984,
-0.04835844039916992,
-0.704559862613678,
0.43046852946281433,
0.5797688364982605,
-0.2917814254760742,
-0.6950289011001587,
-0.8954244256019592,
0.15144407749176025... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
teelinsan/camoscio | teelinsan | 2023-04-02T20:18:52Z | 67 | 1 | null | [
"task_categories:conversational",
"size_categories:10K<n<100K",
"language:it",
"license:openrail",
"llama",
"instruction-tuning",
"region:us"
] | 2023-04-02T20:18:52Z | 2023-04-02T20:12:37.000Z | 2023-04-02T20:12:37 | ---
license: openrail
task_categories:
- conversational
language:
- it
tags:
- llama
- instruction-tuning
size_categories:
- 10K<n<100K
---
# Camoscio instruction-tuning dataset
This repository contains the dataset used to train [Camoscio](https://huggingface.co/teelinsan/camoscio-7b-llama).
This dataset is an Italian translation of the [Stanford Alpaca dataset](https://github.com/tatsu-lab/stanford_alpaca), produced with ChatGPT.
Please refer to the [Camoscio repo](https://github.com/teelinsan/camoscio) for more info.
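Records follow the Stanford Alpaca schema (`instruction`, `input`, `output`). A minimal, hypothetical sketch of assembling one record into a training prompt — the Italian template below is illustrative only; the exact prompt used to train Camoscio is in the project repo:

```python
def build_prompt(record):
    """Format one Alpaca-style record into a single prompt string."""
    if record.get("input"):
        return (
            f"Istruzione: {record['instruction']}\n"
            f"Input: {record['input']}\n"
            f"Risposta: {record['output']}"
        )
    return f"Istruzione: {record['instruction']}\nRisposta: {record['output']}"

record = {"instruction": "Scrivi un saluto.", "input": "", "output": "Ciao!"}
prompt = build_prompt(record)
```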
| [
-0.39629533886909485,
-0.25793296098709106,
0.14604514837265015,
0.15311859548091888,
-0.7005119323730469,
0.04227418452501297,
0.06515903025865555,
-0.25396019220352173,
0.33964863419532776,
0.534730076789856,
-1.052215814590454,
-0.7075017690658569,
-0.5482035875320435,
-0.15740011632442... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BelleGroup/generated_chat_0.4M | BelleGroup | 2023-04-08T03:55:12Z | 67 | 43 | null | [
"task_categories:text2text-generation",
"size_categories:100K<n<1M",
"language:zh",
"license:gpl-3.0",
"region:us"
] | 2023-04-08T03:55:12Z | 2023-04-08T03:36:11.000Z | 2023-04-08T03:36:11 | ---
license: gpl-3.0
task_categories:
- text2text-generation
language:
- zh
size_categories:
- 100K<n<1M
---
# Generated_Chat_0.4M
## 内容
包含约40万条由[BELLE](https://github.com/LianjiaTech/BELLE)项目生成的个性化角色对话数据,包含角色介绍。
注意:此数据集是由ChatGPT产生的,未经过严格校验,题目或解题过程可能包含错误。使用过程中请注意这一点。
## 样例
```
{
"instruction": "于以下角色信息完成一段对话\nAlice是一个年轻的律师,她正在为一个涉及濒危物种保护的案件做准备。Bob是一位当地的居民,他所在的社区经常受到该濒危物种的干扰。\n",
"input": "",
"output": "\nAlice: 你好Bob,我是濒危物种保护案的律师,很高兴见到你。我们前几天发给你的文件你看到了吗?\nBob: 看到了,你们说要禁止我们在该物种的常住地采集蘑菇,这很不公平。\nAlice: 保护濒危物种对整个生态系统都很重要,如果我们的举动损害了这个物种的生存,那么整个生态系统都将遭受损失。\nBob: 我理解您的立场,但是我们一直以来都依靠这个物种来维持我们的经济生活,现在我们要怎么办?\nAlice: 我理解这对当地社区的影响,但是现在我们必须找到一种既能保护这个物种,也同时使您们的经济生活不受影响的解决方案。你有任何想法吗?我们可以一起来想办法。\n"
}
```
### 字段
```
instruction: 指令
input: 输入(本数据集均为空)
output: 输出
```
## 局限性和使用限制
我们要求开发者仅将我们开源的代码、数据、模型及后续衍生物用于研究目的,不得用于商业,以及其他会对社会带来危害的用途。
由于数据是由*ChatGPT*生成的,未经严格验证,在事实性和其他方面还存在一些不足。因此,在使用此数据集时,请务必注意甄别。
本数据集不代表任何一方的立场、利益或想法,无关任何团体的任何类型的主张。因使用本数据集带来的任何损害、纠纷,本项目的开发者不承担任何责任。
***
# Generated_Chat_0.4M
## Contents
Includes approx. 400k Personalized Character Dialogue generated by BELLE, with character Introduction.
Note: this subset was generated by *ChatGPT* and has not been strictly verified; the generated content may contain errors. Please keep this in mind when using this subset.
## Sample
```
{
"instruction": "于以下角色信息完成一段对话\nAlice是一个年轻的律师,她正在为一个涉及濒危物种保护的案件做准备。Bob是一位当地的居民,他所在的社区经常受到该濒危物种的干扰。\n",
"input": "",
"output": "\nAlice: 你好Bob,我是濒危物种保护案的律师,很高兴见到你。我们前几天发给你的文件你看到了吗?\nBob: 看到了,你们说要禁止我们在该物种的常住地采集蘑菇,这很不公平。\nAlice: 保护濒危物种对整个生态系统都很重要,如果我们的举动损害了这个物种的生存,那么整个生态系统都将遭受损失。\nBob: 我理解您的立场,但是我们一直以来都依靠这个物种来维持我们的经济生活,现在我们要怎么办?\nAlice: 我理解这对当地社区的影响,但是现在我们必须找到一种既能保护这个物种,也同时使您们的经济生活不受影响的解决方案。你有任何想法吗?我们可以一起来想办法。\n"
}
```
### Schema
```
instruction: the instruction
input: the input (empty for all records in this dataset)
output: the generated output
```
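A minimal sketch for splitting the `output` field of a record into per-speaker turns, assuming the `Name: utterance` line format seen in the sample above (speaker names vary per record, so they are parsed rather than hard-coded):

```python
def split_turns(dialogue):
    """Parse 'Speaker: utterance' lines into (speaker, utterance) pairs."""
    turns = []
    for line in dialogue.strip().splitlines():
        speaker, sep, utterance = line.partition(":")
        if sep and utterance.strip():
            turns.append((speaker.strip(), utterance.strip()))
    return turns

sample_output = "\nAlice: 你好Bob,很高兴见到你。\nBob: 你好,我也是。\n"
turns = split_turns(sample_output)
```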
## Limitation and Usage Limits
We require developers only use the open-sourced code, data, model and any other artifacts generated via this project for research purposes. Commercial use and other potential harmful use cases are not allowed.
Since this dataset was generated by *ChatGPT* and was not strictly verified, it still has shortcomings regarding factuality and other aspects. When using this dataset, careful inspection is needed.
This dataset does not represent anyone's ground, interest or thought, and is not related to any kind of claim of any groups. The developers of this project do not assume any responsibility to potential harm inflicted by using this dataset and project. | [
-0.5331553220748901,
-0.8393997550010681,
0.21903352439403534,
0.524294376373291,
-0.3963807225227356,
-0.3056497275829315,
-0.011925054714083672,
-0.48035910725593567,
0.5072916150093079,
0.704342782497406,
-0.7559218406677246,
-0.8779782652854919,
-0.6913372278213501,
0.06504296511411667... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Nahrawy/FAID-Depth-ControlNet | Nahrawy | 2023-05-06T18:28:28Z | 67 | 0 | null | [
"region:us"
] | 2023-05-06T18:28:28Z | 2023-04-29T13:28:14.000Z | 2023-04-29T13:28:14 | ---
dataset_info:
features:
- name: image
dtype: image
- name: depth_map
dtype: image
- name: scene
dtype: string
- name: caption
dtype: string
- name: state
dtype: string
splits:
- name: train
num_bytes: 11835627985.25
num_examples: 5550
download_size: 12139477164
dataset_size: 11835627985.25
---
# A Dataset of Flash and Ambient Illumination Pairs from the Crowd
This is a version of the [A Dataset of Flash and Ambient Illumination Pairs from the Crowd](http://yaksoy.github.io/flashambient/) dataset prepared for training ControlNet with depth-map conditioning.
The dataset includes 2775 pairs of flash light and ambient light images. It includes images of people, shelves, plants, toys, rooms and objects.
Captions were generated using the [BLIP-2, Flan T5-xxl](https://huggingface.co/Salesforce/blip2-flan-t5-xxl) model.
Depth maps were generated using the [GLPN fine-tuned on NYUv2 ](https://huggingface.co/vinvino02/glpn-nyu) model.
## Examples

## Disclaimer
I do not own any of this data.
| [
-0.4016498625278473,
-0.2960602641105652,
0.13198058307170868,
0.466913104057312,
-0.3405975103378296,
0.05354185402393341,
0.155287504196167,
-0.691366970539093,
0.3189442455768585,
0.48169246315956116,
-0.5148860216140747,
-0.2881637513637543,
-0.20461122691631317,
-0.058094654232263565,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cdminix/libritts-aligned | cdminix | 2023-10-11T19:46:28Z | 67 | 4 | null | [
"task_categories:automatic-speech-recognition",
"task_categories:text-to-speech",
"annotations_creators:crowdsourced",
"language:en",
"license:cc-by-4.0",
"speech",
"audio",
"automatic-speech-recognition",
"text-to-speech",
"arxiv:1904.02882",
"arxiv:2211.16049",
"region:us"
] | 2023-10-11T19:46:28Z | 2023-05-14T10:29:46.000Z | 2023-05-14T10:29:46 | ---
pretty_name: LibriTTS Corpus with Forced Alignments
annotations_creators:
- crowdsourced
language: en
tags:
- speech
- audio
- automatic-speech-recognition
- text-to-speech
license:
- cc-by-4.0
task_categories:
- automatic-speech-recognition
- text-to-speech
extra_gated_prompt: "When using this dataset to download LibriTTS, you agree to the terms on https://www.openslr.org"
---
> There is also an identical dataset for the new libritts-r dataset at [cdminix/libritts-r-aligned](https://huggingface.co/datasets/cdminix/libritts-r-aligned)
# Dataset Card for LibriTTS with Forced Alignments (and Measures)
UPDATE: The preprocessed alignments are now included in this repository, so the Montreal Forced Aligner does not have to be run locally.
## Requirements
- ``pip install alignments phones`` **(required)**
- ``pip install speech-collator`` (optional)
## Example Item
```python
{
'id': '100_122655_000073_000002.wav',
'speaker': '100',
'text': 'the day after, diana and mary quitted it for distant b.',
'start': 0.0,
'end': 3.6500000953674316,
'phones': ['[SILENCE]', 'ð', 'ʌ', '[SILENCE]', 'd', 'eɪ', '[SILENCE]', 'æ', 'f', 't', 'ɜ˞', '[COMMA]', 'd', 'aɪ', 'æ', 'n', 'ʌ', '[SILENCE]', 'æ', 'n', 'd', '[SILENCE]', 'm', 'ɛ', 'ɹ', 'i', '[SILENCE]', 'k', 'w', 'ɪ', 't', 'ɪ', 'd', '[SILENCE]', 'ɪ', 't', '[SILENCE]', 'f', 'ɜ˞', '[SILENCE]', 'd', 'ɪ', 's', 't', 'ʌ', 'n', 't', '[SILENCE]', 'b', 'i', '[FULL STOP]'],
'phone_durations': [5, 2, 4, 0, 5, 13, 0, 16, 7, 5, 20, 2, 6, 9, 15, 4, 2, 0, 11, 3, 5, 0, 3, 8, 9, 8, 0, 13, 3, 5, 3, 6, 4, 0, 8, 5, 0, 9, 5, 0, 7, 5, 6, 7, 4, 5, 10, 0, 3, 35, 9],
'audio': '/dev/shm/metts/train-clean-360-alignments/100/100_122655_000073_000002.wav'
}
```
The phones are IPA phones, and the phone durations are in frames (assuming a hop length of 256, sample rate of 22050 and window length of 1024). These attributes can be changed using the ``hop_length``, ``sample_rate`` and ``window_length`` arguments to ``LibriTTSAlign``.
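For instance, converting a phone duration from frames to seconds with the default parameters is straightforward (an illustrative sketch, not part of the dataset API):

```python
HOP_LENGTH = 256     # default hop length assumed by this dataset
SAMPLE_RATE = 22050  # default sample rate assumed by this dataset

def frames_to_seconds(n_frames: int) -> float:
    # each frame advances the signal by hop_length samples
    return n_frames * HOP_LENGTH / SAMPLE_RATE

# e.g. the 35-frame phone in the example above lasts roughly 0.41 seconds
print(round(frames_to_seconds(35), 2))
```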
## Data Collator
This dataset comes with a data collator which can be used to create batches of data for training.
It can be installed using ``pip install speech-collator`` ([MiniXC/speech-collator](https://www.github.com/MiniXC/speech-collator)) and can be used as follows:
```python
import json
from datasets import load_dataset
from speech_collator import SpeechCollator
from torch.utils.data import DataLoader
dataset = load_dataset('cdminix/libritts-aligned', split="train")
speaker2idx = json.load(open("speaker2idx.json"))
phone2idx = json.load(open("phone2idx.json"))
collator = SpeechCollator(
    speaker2idx=speaker2idx,
    phone2idx=phone2idx,
)
dataloader = DataLoader(dataset, collate_fn=collator.collate_fn, batch_size=8)
```
You can either download the ``speaker2idx.json`` and ``phone2idx.json`` files from [here](https://huggingface.co/datasets/cdminix/libritts-aligned/tree/main/data) or create them yourself using the following code:
```python
import json
from datasets import load_dataset
from speech_collator import SpeechCollator, create_speaker2idx, create_phone2idx
dataset = load_dataset("cdminix/libritts-aligned", split="train")
# Create speaker2idx and phone2idx
speaker2idx = create_speaker2idx(dataset, unk_idx=0)
phone2idx = create_phone2idx(dataset, unk_idx=0)
# save to json
with open("speaker2idx.json", "w") as f:
    json.dump(speaker2idx, f)
with open("phone2idx.json", "w") as f:
    json.dump(phone2idx, f)
```
### Measures
When using ``speech-collator`` you can also use the ``measures`` argument to specify which measures to use. The following example extracts Pitch and Energy on the fly.
```python
import json
from torch.utils.data import DataLoader
from datasets import load_dataset
from speech_collator import SpeechCollator, create_speaker2idx, create_phone2idx
from speech_collator.measures import PitchMeasure, EnergyMeasure
dataset = load_dataset("cdminix/libritts-aligned", split="train")
speaker2idx = json.load(open("data/speaker2idx.json"))
phone2idx = json.load(open("data/phone2idx.json"))
# Create SpeechCollator
speech_collator = SpeechCollator(
    speaker2idx=speaker2idx,
    phone2idx=phone2idx,
    measures=[PitchMeasure(), EnergyMeasure()],
    return_keys=["measures"],
)
# Create DataLoader
dataloader = DataLoader(
    dataset,
    batch_size=8,
    collate_fn=speech_collator.collate_fn,
)
```
COMING SOON: Detailed documentation on how to use the measures at [MiniXC/speech-collator](https://www.github.com/MiniXC/speech-collator).
## Splits
This dataset has the following splits:
- ``train``: All the training data, except one sample per speaker which is used for validation.
- ``dev``: The validation data, one sample per speaker.
- ``train.clean.100``: Training set derived from the original materials of the train-clean-100 subset of LibriSpeech.
- ``train.clean.360``: Training set derived from the original materials of the train-clean-360 subset of LibriSpeech.
- ``train.other.500``: Training set derived from the original materials of the train-other-500 subset of LibriSpeech.
- ``dev.clean``: Validation set derived from the original materials of the dev-clean subset of LibriSpeech.
- ``dev.other``: Validation set derived from the original materials of the dev-other subset of LibriSpeech.
- ``test.clean``: Test set derived from the original materials of the test-clean subset of LibriSpeech.
- ``test.other``: Test set derived from the original materials of the test-other subset of LibriSpeech.
## Environment Variables
There are a few environment variables which can be set.
- ``LIBRITTS_VERBOSE``: If set, will print out more information about the dataset creation process.
- ``LIBRITTS_MAX_WORKERS``: The number of workers to use when creating the alignments. Defaults to ``cpu_count()``.
- ``LIBRITTS_PATH``: The path to download LibriTTS to. Defaults to the value of ``HF_DATASETS_CACHE``.
# Citation
When using LibriTTS please cite the following papers:
- [LibriTTS: A Corpus Derived from LibriSpeech for Text-to-Speech](https://arxiv.org/abs/1904.02882)
- [Montreal Forced Aligner: Trainable text-speech alignment using Kaldi](https://www.researchgate.net/publication/319185277_Montreal_Forced_Aligner_Trainable_Text-Speech_Alignment_Using_Kaldi)
When using the Measures please cite the following paper (ours):
- [Evaluating and reducing the distance between synthetic and real speech distributions](https://arxiv.org/abs/2211.16049) | [
-0.26794394850730896,
-0.34933632612228394,
0.04527062922716141,
-0.04565834626555443,
-0.07582250237464905,
-0.03914184495806694,
-0.3166395425796509,
-0.16839338839054108,
0.31466761231422424,
0.307718425989151,
-0.6678676605224609,
-0.588403582572937,
-0.19299544394016266,
-0.1570995301... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yongchanskii/youtube-data-for-developers | yongchanskii | 2023-08-22T17:25:33Z | 67 | 1 | null | [
"region:us"
] | 2023-08-22T17:25:33Z | 2023-08-22T17:14:20.000Z | 2023-08-22T17:14:20 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 3663423940.287
num_examples: 8389
- name: test
num_bytes: 417482475.0
num_examples: 933
download_size: 4039879845
dataset_size: 4080906415.287
---
# Dataset Card for "youtube-for-developers"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7912440896034241,
-0.3997674882411957,
0.07163651287555695,
0.36902859807014465,
-0.07434588670730591,
0.21146506071090698,
-0.014208506792783737,
0.2301807552576065,
0.8985151648521423,
0.44269314408302307,
-1.0075945854187012,
-0.6873770952224731,
-0.5880352258682251,
-0.3636859655380... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MU-NLPC/Calc-mawps | MU-NLPC | 2023-10-30T15:55:30Z | 67 | 0 | null | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"math world problems",
"math",
"arithmetics",
"arxiv:2305.15017",
"region:us"
] | 2023-10-30T15:55:30Z | 2023-09-08T21:19:20.000Z | 2023-09-08T21:19:20 | ---
language:
- en
license: mit
size_categories:
- 1K<n<10K
task_categories:
- text-generation
tags:
- math world problems
- math
- arithmetics
dataset_info:
- config_name: default
features:
- name: id
dtype: string
- name: question
dtype: string
- name: chain
dtype: string
- name: result
dtype: string
- name: result_float
dtype: float64
- name: equation
dtype: string
- name: expression
dtype: string
splits:
- name: train
num_bytes: 298347
num_examples: 1089
- name: validation
num_bytes: 285321
num_examples: 1040
- name: test
num_bytes: 142648
num_examples: 520
download_size: 0
dataset_size: 726316
- config_name: original-splits
features:
- name: id
dtype: string
- name: question
dtype: string
- name: chain
dtype: string
- name: result
dtype: string
- name: result_float
dtype: float64
- name: equation
dtype: string
- name: expression
dtype: string
splits:
- name: train
num_bytes: 1000546
num_examples: 3636
- name: test
num_bytes: 142648
num_examples: 520
- name: validation
num_bytes: 285321
num_examples: 1040
download_size: 128730
dataset_size: 1428515
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
- config_name: original-splits
data_files:
- split: train
path: original-splits/train-*
- split: test
path: original-splits/test-*
- split: validation
path: original-splits/validation-*
---
# Dataset Card for Calc-MAWPS
## Summary
The dataset is a collection of simple math word problems focused on arithmetic. It is derived from <https://huggingface.co/datasets/omarxadel/MaWPS-ar>.
The main addition in this dataset variant is the `chain` column. It was created by converting the solution to a simple html-like language that can be easily
parsed (e.g. by BeautifulSoup). The data contains 3 types of tags:
- gadget: A tag whose content is intended to be evaluated by calling an external tool (sympy-based calculator in this case)
- output: An output of the external tool
- result: The final answer to the mathematical problem (a number)
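As a rough sketch of how such a chain can be consumed (using only the standard library instead of BeautifulSoup; the chain string below is a hypothetical example, and real chains may interleave free text between the tags):

```python
import re

def parse_chain(chain: str) -> dict:
    # collect the contents of each html-like tag type described above
    return {
        tag: re.findall(rf"<{tag}>(.*?)</{tag}>", chain, re.DOTALL)
        for tag in ("gadget", "output", "result")
    }

# hypothetical chain in the format described above
chain = "<gadget>4 * 5</gadget><output>20</output><result>20</result>"
parsed = parse_chain(chain)
print(parsed["result"])  # ['20']
```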
## Supported Tasks
This variant of the dataset is intended for training Chain-of-Thought reasoning models able to use external tools to enhance the factuality of their responses.
This dataset presents in-context scenarios where models can outsource the computations in the reasoning chain to a calculator.
## Data splits
We provide 2 variants of the dataset. In the first one, the data splits correspond to the original one and can be loaded using:
```python
datasets.load_dataset("MU-NLPC/calc-mawps", "original-splits")
```
The second one is filtered to prevent data leaks (overly similar examples in train and test/val splits) within and across datasets in [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483).
Specifically, we filtered out around 2,500 near-duplicates from the train set that were similar to some instances in the MAWPS val and test splits and ASDiv-A test split. You can load this variant via:
```python
datasets.load_dataset("MU-NLPC/calc-mawps")
```
## Attributes:
- **id**: id of the example
- **question**: problem description in English
- **question_arabic**: problem description in Arabic
- **chain**: series of simple operations (derived from **expression**) that lead to the solution
- **result**: the solution for x as a number or fraction (string)
- **result_float**: same as `result` but converted to a float
- **equation**: an equation that needs to be solved for `x` to obtain the result. Usually in the form of "x = ..." but not always.
- **expression**: arithmetic expression derived from `equation` that solves it for `x`
Attributes **id**, **question**, **chain**, and **result** are present in all datasets in [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483).
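Because `result` is stored as a string that may hold an integer or a fraction, a small helper (illustrative only, not part of the dataset tooling) can recover `result_float` from it:

```python
from fractions import Fraction

def result_to_float(result: str) -> float:
    # `result` may be an integer, a decimal, or a fraction string like "3/4"
    return float(Fraction(result))

print(result_to_float("3/4"))  # 0.75
print(result_to_float("17"))   # 17.0
```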
## Related work
This dataset was created as a part of a larger effort in training models capable of using a calculator during inference, which we call Calcformers.
- [**Calc-X collection**](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483) - datasets for training Calcformers
- [**Calcformers collection**](https://huggingface.co/collections/MU-NLPC/calcformers-65367392badc497807b3caf5) - calculator-using models we trained and published on HF
- [**Calc-X and Calcformers paper**](https://arxiv.org/abs/2305.15017)
- [**Calc-X and Calcformers repo**](https://github.com/prompteus/calc-x)
Here are links to the original dataset:
- [**original MAWPS dataset**](http://lang.ee.washington.edu/MAWPS)
- [**MAWPS dataset variant in Arabic**](https://huggingface.co/datasets/omarxadel/MaWPS-ar)
- [**original MAWPS paper**](https://aclanthology.org/N16-1136/)
- [**original MAWPS repo**](https://github.com/sroy9/mawps)
## Licence
MIT, consistent with the original source dataset linked above.
## Cite
If you use this version of the dataset in research, please cite the original [MAWPS paper](https://aclanthology.org/N16-1136/), and [Calc-X paper](https://arxiv.org/abs/2305.15017) as follows:
```bibtex
@inproceedings{kadlcik-etal-2023-soft,
title = "Calc-X and Calcformers: Empowering Arithmetical Chain-of-Thought through Interaction with Symbolic Systems",
author = "Marek Kadlčík and Michal Štefánik and Ondřej Sotolář and Vlastimil Martinek",
booktitle = "Proceedings of the The 2023 Conference on Empirical Methods in Natural Language Processing: Main track",
month = dec,
year = "2023",
address = "Singapore, Singapore",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2305.15017",
}
```
| [
-0.5019333958625793,
-0.40981122851371765,
0.21785253286361694,
0.07092220336198807,
0.051923755556344986,
-0.09921543300151825,
0.12329035252332687,
-0.3341119587421417,
0.3315053880214691,
0.3966071903705597,
-0.6841400861740112,
-0.25404661893844604,
-0.7049089670181274,
0.1368944942951... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jlh-ibm/earnings_call | jlh-ibm | 2023-09-15T21:34:39Z | 67 | 1 | null | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:cc0-1.0",
"finance",
"region:us"
] | 2023-09-15T21:34:39Z | 2023-09-15T20:25:43.000Z | 2023-09-15T20:25:43 | ---
license: cc0-1.0
task_categories:
- text-classification
language:
- en
tags:
- finance
pretty_name: Earnings Calls Dataset
size_categories:
- 10K<n<100K
dataset_info:
- config_name: stock_prices
features:
- name: date
dtype: date64
- name: open
dtype: float32
- name: high
dtype: float32
- name: low
dtype: float32
- name: close
dtype: float32
- name: adj_close
dtype: float32
- name: volume
dtype: int64
- name: company
dtype: string
splits:
- name: train
num_bytes: 578818
num_examples: 13155
download_size: 290243
dataset_size: 578818
- config_name: transcript-sentiment
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': positive
- name: company
dtype: string
- name: date
dtype: date64
- name: para_no
dtype: int32
splits:
- name: train
num_bytes: 7414686
num_examples: 6851
- name: test
num_bytes: 1928515
num_examples: 1693
download_size: 3868059
dataset_size: 9343201
- config_name: transcripts
features:
- name: company
dtype: string
- name: date
dtype: date64
- name: transcript
dtype: string
splits:
- name: train
num_bytes: 9592380
num_examples: 150
- name: test
num_bytes: 2458569
num_examples: 38
download_size: 3577816
dataset_size: 12050949
---
# Dataset Card for Earnings Calls Dataset
## Dataset Description
- **Homepage:** https://dataverse.nl/dataset.xhtml?persistentId=doi:10.34894/TJE0D0
- **Paper:** https://www.preprints.org/manuscript/202102.0424/v1
- **Point of Contact:** [Francesco Lelli](https://francescolelli.info/)
### Dataset Summary
The dataset reports a collection of earnings call transcripts, the related stock prices, and the sector index. In total, there are
188 transcripts, 11970 stock prices, and 1196 sector index values. All of these data originated in the period 2016-2020 and are
related to the NASDAQ stock market. The data collection was made possible by Yahoo Finance and Thomson Reuters Eikon:
Yahoo Finance enabled the search for stock values, and Thomson Reuters Eikon provided the earnings call transcripts.
Lastly, the dataset can be used as a benchmark for the evaluation of several NLP techniques
to understand their potential for financial applications. It is also possible to expand the dataset by extending the period
in which the data originated, following a similar procedure.
### Citation Information
```bibtex
@data{TJE0D0_2021,
author = {Roozen, Dexter and Lelli, Francesco},
publisher = {DataverseNL},
title = {{Stock Values and Earnings Call Transcripts: a Sentiment Analysis Dataset}},
year = {2021},
version = {V1},
doi = {10.34894/TJE0D0},
url = {https://doi.org/10.34894/TJE0D0}
}
```
| [
0.07257643342018127,
-0.4212290048599243,
0.11060873419046402,
0.1967032551765442,
-0.44190773367881775,
0.25062209367752075,
-0.10700296610593796,
-0.5156075954437256,
0.7465356588363647,
0.19823648035526276,
-0.7247754335403442,
-0.7812236547470093,
-0.377558171749115,
-0.020337717607617... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SEACrowd/smsa | SEACrowd | 2023-09-26T12:33:48Z | 67 | 0 | null | [
"language:ind",
"sentiment-analysis",
"region:us"
] | 2023-09-26T12:33:48Z | 2023-09-26T11:31:18.000Z | 2023-09-26T11:31:18 | ---
tags:
- sentiment-analysis
language:
- ind
---
# smsa
SmSA (Purwarianti and Crisdayanti, 2019) is a sentence-level sentiment analysis dataset: a collection of comments and reviews
in Indonesian obtained from multiple online platforms. The text was crawled and then annotated by several Indonesian linguists
to construct this dataset. There are three possible sentiments in the SmSA dataset: positive, negative, and neutral.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@INPROCEEDINGS{8904199,
author={Purwarianti, Ayu and Crisdayanti, Ida Ayu Putu Ari},
booktitle={2019 International Conference of Advanced Informatics: Concepts, Theory and Applications (ICAICTA)},
title={Improving Bi-LSTM Performance for Indonesian Sentiment Analysis Using Paragraph Vector},
year={2019},
pages={1-5},
doi={10.1109/ICAICTA.2019.8904199}
}
@inproceedings{wilie2020indonlu,
title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
author={Wilie, Bryan and Vincentio, Karissa and Winata, Genta Indra and Cahyawijaya, Samuel and Li, Xiaohong and Lim, Zhi Yuan and Soleman, Sidik and Mahendra, Rahmad and Fung, Pascale and Bahar, Syafri and others},
booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
pages={843--857},
year={2020}
}
```
## License
Creative Commons Attribution Share-Alike 4.0 International
## Homepage
[https://github.com/IndoNLP/indonlu](https://github.com/IndoNLP/indonlu)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) | [
-0.6015718579292297,
-0.635109007358551,
0.07471593469381332,
0.7683966755867004,
-0.6727303862571716,
-0.21184267103672028,
-0.2722903788089752,
-0.4321541488170624,
0.5596944093704224,
0.5630709528923035,
-0.48217087984085083,
-0.6517030000686646,
-0.5413710474967957,
0.669379711151123,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
alexrs/alpaca-cleaned-5-clusters | alexrs | 2023-10-16T14:42:10Z | 67 | 0 | null | [
"region:us"
] | 2023-10-16T14:42:10Z | 2023-10-16T14:42:06.000Z | 2023-10-16T14:42:06 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: input
dtype: string
- name: cluster
dtype: int32
splits:
- name: train
num_bytes: 40490946
num_examples: 51760
download_size: 24177437
dataset_size: 40490946
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "alpaca-cleaned-5-clusters"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8259124755859375,
-0.263550728559494,
0.3772999048233032,
0.26019975543022156,
-0.35997000336647034,
-0.1047220528125763,
0.32164087891578674,
-0.2979476749897003,
1.0157274007797241,
0.5690087080001831,
-0.8680462837219238,
-0.9993091821670532,
-0.57558673620224,
-0.0753532350063324,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
brunnolou/swiss-code-of-obligations | brunnolou | 2023-11-09T18:37:10Z | 67 | 0 | null | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"language:de",
"license:apache-2.0",
"legal",
"region:us"
] | 2023-11-09T18:37:10Z | 2023-10-17T15:37:22.000Z | 2023-10-17T15:37:22 | ---
license: apache-2.0
language:
- en
- de
tags:
- legal
pretty_name: Swiss Code of Obligations
size_categories:
- 1K<n<10K
task_categories:
- question-answering
configs:
- config_name: default
data_files:
- split: civil_code_de_paraphrase_multilingual
path: swiss-civil-code-de-paraphrase-multilingual-mpnet-base-v2.jsonl
- split: code_of_obligations_en_gte
path: swiss-code-of-obligations-en-gte-small.jsonl
- split: code_of_obligations_en_paraphrase_multilingual
path: swiss-code-of-obligations-en-paraphrase-multilingual-mpnet-base-v2.jsonl
---
# Swiss Code of Obligations (OR) and Swiss Civil Code
#### (Part Five: The Code of Obligations) of 30 March 1911 (Status as of 1 September 2023)
Files generated from the Swiss [publication platform for federal law](https://www.fedlex.admin.ch/en/home)
[Swiss Code of Obligations](https://www.fedlex.admin.ch/eli/cc/27/317_321_377/en)
### Format
Each article has the following type definition:
```ts
{
  headings: string[]
  article: string
  link: string
  content: string
  vector: number[]
}
```
## With vector embeddings by [Xenova/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/Xenova/paraphrase-multilingual-mpnet-base-v2)
- swiss-civil-code-de-paraphrase-multilingual-mpnet-base-v2.jsonl
- swiss-code-of-obligations-en-paraphrase-multilingual-mpnet-base-v2.jsonl
## With vector embeddings by [Xenova/gte-small](https://huggingface.co/Xenova/gte-small)
- swiss-code-of-obligations-en-gte-small.jsonl
You can also find the original HTML where the data was extracted from.
- [html](https://huggingface.co/datasets/brunnolou/swiss-code-of-obligations/resolve/main/swiss-code-of-obligations.html)
# [Qdrant Vector Database](https://qdrant.tech/)
### With vector embeddings by [Xenova/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/Xenova/paraphrase-multilingual-mpnet-base-v2)
- [swiss-civil-code-de-paraphrase-multilingual-mpnet-base-v2.snapshot.zip](https://huggingface.co/datasets/brunnolou/swiss-code-of-obligations/resolve/main/swiss-civil-code-de-paraphrase-multilingual-mpnet-base-v2.snapshot.zip)
- [swiss-code-of-obligations-en-paraphrase-multilingual-mpnet-base-v2.snapshot.zip](https://huggingface.co/datasets/brunnolou/swiss-code-of-obligations/resolve/main/swiss-code-of-obligations-en-paraphrase-multilingual-mpnet-base-v2.snapshot.zip)
## With vector embeddings by [Xenova/gte-small](https://huggingface.co/Xenova/gte-small)
- [Snapshot - Qdrant version v1.6.1 (zip)](https://huggingface.co/datasets/brunnolou/swiss-code-of-obligations/resolve/main/swiss-code-of-obligations-articles-gte-small-2023-10-18-12-13-25_qdrant-v1-6-1.snapshot.zip)
### 💾 Setup Qdrant Vector Database
1. Open the Qdrant dashboard console <http://localhost:6333/dashboard#/console>
1. Create a new collection running this:
> Vector size for `gte-small` is **`384`**. For `paraphrase-multilingual-mpnet-base-v2` is **`768`**.
```curl
PUT collections/COLLECTION_NAME
{
"vectors": {
"size": 384,
"distance": "Cosine"
}
}
```
1. Download the [snapshot file](https://huggingface.co/datasets/brunnolou/swiss-code-of-obligations/resolve/main/swiss-code-of-obligations-articles-gte-small-2023-10-18-12-13-25_qdrant-v1-6-1.snapshot.zip)
1. Unzip the file using the terminal (⚠️ **_not with Finder on Mac_** ⚠️) with `unzip <file_name>`
1. Upload the file using the following command. Adapt the fields accordingly and run it from the same directory as your snapshot
```shell
curl -X POST 'http://localhost:6333/collections/swiss-or/snapshots/upload' \
-H 'Content-Type:multipart/form-data' \
-F 'snapshot=@swiss-code-of-obligations-articles-gte-small-2023-10-18-12-13-25.snapshot'
```
<img src="https://cdn-uploads.huggingface.co/production/uploads/65256343a9f5b404762da984/LgxeBf0Bu_IkFtM3niWfq.png" width=480 /> | [
-0.48846888542175293,
-0.11388547718524933,
0.49195393919944763,
0.31726887822151184,
-0.40831294655799866,
0.06750516593456268,
0.19299660623073578,
-0.18338163197040558,
0.43101754784584045,
0.5386882424354553,
-0.5501182079315186,
-0.6400082111358643,
-0.47758030891418457,
0.26325193047... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
LosHuesitos9-9/Huesitos | LosHuesitos9-9 | 2023-11-04T19:08:54Z | 67 | 1 | null | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"language:es",
"license:cc",
"rf100",
"medical",
"code",
"region:us"
] | 2023-11-04T19:08:54Z | 2023-10-23T14:33:42.000Z | 2023-10-23T14:33:42 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
- es
license:
- cc
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: Huesitos
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': bone-fracture
'1': angle
'2': fracture
'3': line
'4': messed_up_angle
splits:
- name: train
num_bytes: 150839322.0
num_examples: 626
- name: validation
num_bytes: 1278386.0
num_examples: 44
- name: test
num_bytes: 2530151.0
num_examples: 88
download_size: 71039842
dataset_size: 154647859.0
tags:
- rf100
- medical
- code
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
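The coco-format `bbox` is `[x_min, y_min, width, height]`; a small illustrative helper (not part of the dataset) converts it to corner coordinates and checks it against the `area` field of the example instance above:

```python
def coco_to_corners(bbox):
    # coco format: [x_min, y_min, width, height]
    x, y, w, h = bbox
    return [x, y, x + w, y + h]  # [x_min, y_min, x_max, y_max]

# first bbox from the example data instance above
bbox = [302.0, 109.0, 73.0, 52.0]
print(coco_to_corners(bbox))  # [302.0, 109.0, 375.0, 161.0]
print(bbox[2] * bbox[3])      # 3796.0, matching the corresponding 'area' value
```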
## Licensing Information
See original homepage https://universe.roboflow.com/object-detection/bone-fracture-7fylg
### Citation Information
```
@misc{ bone-fracture-7fylg,
title = { bone fracture 7fylg Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/bone-fracture-7fylg } },
url = { https://universe.roboflow.com/object-detection/bone-fracture-7fylg },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}
```
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset.
-0.42295026779174805,
-0.6967936754226685,
0.3554909825325012,
-0.013594054616987705,
-0.487867534160614,
-0.3171183168888092,
0.13760824501514435,
-0.5098237991333008,
0.3092033565044403,
0.40682631731033325,
-0.5170034170150757,
-1.048882246017456,
-0.4806983470916748,
0.2487727105617523... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
automated-research-group/phi-winogrande-results | automated-research-group | 2023-10-30T01:03:01Z | 67 | 0 | null | [
"region:us"
] | 2023-10-30T01:03:01Z | 2023-10-28T13:25:32.000Z | 2023-10-28T13:25:32 | ---
dataset_info:
- config_name: '{''do_sample''=False, ''beams''=10}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 42573
dataset_size: 47503
- config_name: '{''do_sample''=False, ''beams''=1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 42573
dataset_size: 47503
- config_name: '{''do_sample''=False, ''beams''=5}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 42573
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=100,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=100,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=100,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 0
dataset_size: 47503
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=100,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=100,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=100,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=1000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=1000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=1000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=10000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=10000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=10000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=1000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=1000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=1000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=10000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=10000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=10000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=100,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=100,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=100,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=1000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=1000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=1000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=10000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=10000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=10000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=100,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=100,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=100,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=1000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=1000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=1000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=10000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=10000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=10000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=100,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=100,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=100,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=1000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=1000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=1000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=10000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=10000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=10000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=100,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=100,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=100,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=1000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=1000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=1000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=10000,
''top_p''=0.05}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=10000,
''top_p''=0.1}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=10000,
''top_p''=0.2}'
features:
- name: id
dtype: string
- name: prediction
dtype: string
- name: likelihood
dtype: float32
- name: perplexity
dtype: float32
- name: accuracy
dtype: bool
splits:
- name: train
num_bytes: 47503
num_examples: 1267
download_size: 29469
dataset_size: 47503
configs:
- config_name: '{''do_sample''=False, ''beams''=10}'
data_files:
- split: train
path: '{''do_sample''=False, ''beams''=10}/train-*'
- config_name: '{''do_sample''=False, ''beams''=1}'
data_files:
- split: train
path: '{''do_sample''=False, ''beams''=1}/train-*'
- config_name: '{''do_sample''=False, ''beams''=5}'
data_files:
- split: train
path: '{''do_sample''=False, ''beams''=5}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=100,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=100, ''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=100,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=100, ''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=100,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=100, ''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=100, ''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=100, ''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=100, ''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=1, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=1000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.9, ''top_k''=10000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=100,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=1000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=0.95, ''top_k''=10000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=100, ''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=100, ''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=100,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=100, ''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=1000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''beams''=5, ''temperature''=1.0, ''top_k''=10000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=100,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=100, ''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=100,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=100, ''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=100,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=100, ''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=1000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=1000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=1000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=1000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=1000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=1000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=10000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=10000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=10000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=10000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=10000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.9, ''beams''=1, ''top_k''=10000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=1000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=1000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=1000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=1000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=1000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=1000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=10000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=10000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=10000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=10000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=10000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.9, ''beams''=5, ''top_k''=10000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=100,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=100,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=100,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=100,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=100,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=100,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=1000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=1000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=1000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=1000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=1000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=1000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=10000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=10000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=10000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=10000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=10000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=1, ''top_k''=10000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=100,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=100,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=100,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=100,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=100,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=100,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=1000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=1000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=1000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=1000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=1000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=1000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=10000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=10000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=10000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=10000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=10000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=0.95, ''beams''=5, ''top_k''=10000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=100,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=100, ''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=100,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=100, ''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=100,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=100, ''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=1000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=1000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=1000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=1000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=1000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=1000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=10000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=10000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=10000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=10000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=10000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=1, ''top_k''=10000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=100,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=100, ''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=100,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=100, ''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=100,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=100, ''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=1000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=1000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=1000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=1000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=1000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=1000,
''top_p''=0.2}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=10000,
''top_p''=0.05}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=10000,
''top_p''=0.05}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=10000,
''top_p''=0.1}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=10000,
''top_p''=0.1}/train-*'
- config_name: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=10000,
''top_p''=0.2}'
data_files:
- split: train
path: '{''do_sample''=True, ''temperature''=1.0, ''beams''=5, ''top_k''=10000,
''top_p''=0.2}/train-*'
---
# Dataset Card for "phi-winogrande-results"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.46147608757019043,
-0.16152769327163696,
0.20000076293945312,
0.1740904599428177,
-0.3544479310512543,
-0.14356273412704468,
0.27762019634246826,
-0.25212958455085754,
1.022336483001709,
0.33437854051589966,
-0.6707325577735901,
-0.7234161496162415,
-0.7288936376571655,
-0.5588348507881... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HoangHa/CulturaX001part | HoangHa | 2023-11-24T07:34:59Z | 67 | 0 | null | [
"region:us"
] | 2023-11-24T07:34:59Z | 2023-10-28T13:58:47.000Z | 2023-10-28T13:58:47 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
atmallen/qm_alice_easy_2_grader_first_1.0e | atmallen | 2023-11-16T18:27:13Z | 67 | 0 | null | [
"region:us"
] | 2023-11-16T18:27:13Z | 2023-11-16T03:19:21.000Z | 2023-11-16T03:19:21 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: alice_label
dtype: bool
- name: bob_label
dtype: bool
- name: difficulty
dtype: int64
- name: statement
dtype: string
- name: choices
sequence: string
- name: character
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: train
num_bytes: 10359818.0
num_examples: 117117
- name: validation
num_bytes: 1000602.0
num_examples: 11279
- name: test
num_bytes: 993048.0
num_examples: 11186
download_size: 2659401
dataset_size: 12353468.0
---
# Dataset Card for "qm_alice_easy_2_grader_first_1.0e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3344939053058624,
-0.3498890697956085,
0.16020184755325317,
0.182724729180336,
-0.15057335793972015,
-0.13612152636051178,
0.6436612010002136,
0.08501214534044266,
0.46244120597839355,
0.27040308713912964,
-0.7786396145820618,
-0.8800020217895508,
-0.7521318197250366,
-0.396703749895095... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hackathon-pln-es/poems-es | hackathon-pln-es | 2022-03-27T18:39:08Z | 66 | 4 | null | [
"license:wtfpl",
"region:us"
] | 2022-03-27T18:39:08Z | 2022-03-21T18:36:23.000Z | 2022-03-21T18:36:23 | ---
license: wtfpl
---
Dataset downloaded from kaggle.com.
The original file contained information in English, which was later translated for this use.
The dataset contains the following columns:
- Autor: the author of the poem.
- Contenido: the full text of the poem.
- Nombre del poema: the title of the poem.
- Años: the year in which the poem was written.
- Tipo: the type/category the poem belongs to. | [
-0.2876763343811035,
-0.30542224645614624,
-0.02739989571273327,
0.4783092439174652,
-0.5683945417404175,
-0.0017144366865977645,
-0.14563897252082825,
-0.36388689279556274,
0.4929322302341461,
0.8113849759101868,
-0.8375982046127319,
-0.8564581274986267,
-0.6421507596969604,
0.46702131628... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hackathon-pln-es/neutral-es | hackathon-pln-es | 2022-10-25T10:20:48Z | 66 | 6 | null | [
"task_categories:text2text-generation",
"task_categories:translation",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:es",
"region:us"
] | 2022-10-25T10:20:48Z | 2022-03-31T18:02:00.000Z | 2022-03-31T18:02:00 | ---
language:
- es
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
task_categories:
- text2text-generation
- translation
task_ids: []
pretty_name: neutralES
---
# Spanish Gender Neutralization
<p align="center">
<img src="https://upload.wikimedia.org/wikipedia/commons/2/29/Gender_equality_symbol_%28clipart%29.png" width="250"/>
</p>
Spanish is a beautiful language with many ways of referring to people, neutralizing gender by using resources already available in the language. One would say *Todas las personas asistentes* instead of *Todos los asistentes*, which is a more inclusive way of talking about people. This dataset collects a set of manually annotated examples of gendered-to-neutral Spanish transformations.
The intended use of this dataset is to train a Spanish language model that translates from gendered to neutral language, in order to produce more inclusive sentences.
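As a minimal sketch of the task shape this dataset captures (assuming a toy, hypothetical rule table built from the single example above — an actual trained model would generalize far beyond exact string matches), a lookup-based rewriter could look like:

```python
# Toy, illustrative only: a hypothetical one-entry rule table.
# Real gendered-to-neutral translation needs a trained model; this
# only demonstrates the input/output shape of the transformation.
NEUTRAL_RULES = {
    "Todos los asistentes": "Todas las personas asistentes",
}

def neutralize(sentence: str) -> str:
    """Apply known gendered-to-neutral rewrites to a sentence."""
    for gendered, neutral in NEUTRAL_RULES.items():
        sentence = sentence.replace(gendered, neutral)
    return sentence

print(neutralize("Todos los asistentes recibieron el material."))
# Sentences with no known gendered phrase pass through unchanged.
```

A seq2seq model trained on the pairs in this dataset replaces the rule table, handling agreement and phrasing the lookup cannot.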
### Compiled sources
One of the major challenges was obtaining a dataset well suited to the gender-inclusion purpose; therefore, the team opted to dedicate a considerable amount of time to building it from scratch. You can find the results here.
The data used for model training was manually created from a compilation of sources: a series of guidelines and manuals on the use of non-sexist language issued by the Spanish Ministry of Health, Social Services and Equality, listed in this linked [document](https://www.inmujeres.gob.es/servRecursos/formacion/GuiasLengNoSexista/docs/Guiaslenguajenosexista_.pdf).
**NOTE: Appart from manually anotated samples, this dataset has been further increased by applying data augmentation so a minumin number of training examples are generated.**
* [Guía para un discurso igualitario en la Universidad de Alicante](https://ieg.ua.es/es/documentos/normativasobreigualdad/guia-para-un-discurso-igualitario-en-la-ua.pdf)
* [Guía UC de Comunicación en Igualdad](<https://web.unican.es/unidades/igualdad/SiteAssets/igualdad/comunicacion-en-igualdad/guia%20comunicacion%20igualdad%20(web).pdf>)
* [Buenas prácticas para el tratamiento del lenguaje en igualdad](https://e-archivo.uc3m.es/handle/10016/22811)
* [Guía del lenguaje no sexista de la Universidad de Castilla-La Mancha](https://unidadigualdad.ugr.es/page/guiialenguajeuniversitarionosexista_universidaddecastillalamancha/!)
* [Guía de Lenguaje Para el Ámbito Educativo](https://www.educacionyfp.gob.es/va/dam/jcr:8ce318fd-c8ff-4ad2-97b4-7318c27d1682/guialenguajeambitoeducativo.pdf)
* [Guía para un uso igualitario y no sexista del lenguaje y de la imagen en la Universidad de Jaén](https://www.ujaen.es/servicios/uigualdad/sites/servicio_uigualdad/files/uploads/Guia_lenguaje_no_sexista.pdf)
* [Guía de uso no sexista del vocabulario español](https://www.um.es/documents/2187255/2187763/guia-leng-no-sexista.pdf/d5b22eb9-b2e4-4f4b-82aa-8a129cdc83e3)
* [Guía para el uso no sexista de la lengua castellana y de imágenes en la UPV/EHV](https://www.ehu.eus/documents/1734204/1884196/Guia_uso_no_sexista_EHU.pdf)
* [Guía de lenguaje no sexista UNED](http://portal.uned.es/pls/portal/docs/PAGE/UNED_MAIN/LAUNIVERSIDAD/VICERRECTORADOS/GERENCIA/OFICINA_IGUALDAD/CONCEPTOS%20BASICOS/GUIA_LENGUAJE.PDF)
* [COMUNICACIÓN AMBIENTAL CON PERSPECTIVA DE GÉNERO](https://cima.cantabria.es/documents/5710649/5729124/COMUNICACI%C3%93N+AMBIENTAL+CON+PERSPECTIVA+DE+G%C3%89NERO.pdf/ccc18730-53e3-35b9-731e-b4c43339254b)
* [Recomendaciones para la utilización de lenguaje no sexista](https://www.csic.es/sites/default/files/guia_para_un_uso_no_sexista_de_la_lengua_adoptada_por_csic2.pdf)
* [Estudio sobre lenguaje y contenido sexista en la Web](https://www.mujeresenred.net/IMG/pdf/Estudio_paginas_web_T-incluye_ok.pdf)
* [Nombra.en.red. En femenino y en masculino](https://www.inmujeres.gob.es/areasTematicas/educacion/publicaciones/serieLenguaje/docs/Nombra_en_red.pdf)
## Team Members
- Fernando Velasco [(fermaat)](https://huggingface.co/fermaat)
- Cibeles Redondo [(CibelesR)](https://huggingface.co/CibelesR)
- Juan Julian Cea [(Juanju)](https://huggingface.co/Juanju)
- Magdalena Kujalowicz [(MacadellaCosta)](https://huggingface.co/MacadellaCosta)
- Javier Blasco [(javiblasco)](https://huggingface.co/javiblasco)
### Enjoy and feel free to collaborate with this dataset 🤗 | [
-0.4260258674621582,
-0.43915483355522156,
0.1744011640548706,
0.5549672245979309,
-0.16121722757816315,
-0.13464109599590302,
0.03898850455880165,
-0.3005357086658478,
0.30029329657554626,
0.4022575914859772,
-0.5253346562385559,
-0.7899512052536011,
-0.2539306581020355,
0.537784695625305... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigscience-data/roots_en_the_pile_uspto | bigscience-data | 2022-12-12T11:03:28Z | 66 | 1 | null | [
"language:en",
"license:mit",
"region:us"
] | 2022-12-12T11:03:28Z | 2022-05-18T09:09:05.000Z | 2022-05-18T09:09:05 | ---
language: en
license: mit
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_en_the_pile_uspto
# the_pile_uspto
- Dataset uid: `the_pile_uspto`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 0.5358 % of total
- 2.9032 % of en
### BigScience processing steps
#### Filters applied to: en
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
| [
-0.5995587706565857,
-0.5097663402557373,
0.2784240245819092,
0.14074741303920746,
-0.5371972918510437,
-0.001992111327126622,
0.07125589996576309,
0.06527884304523468,
0.6442409157752991,
0.8114985823631287,
-0.4424344599246979,
-0.7296662330627441,
-0.4987562894821167,
0.1671366691589355... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BeIR/trec-covid-generated-queries | BeIR | 2022-10-23T06:13:36Z | 66 | 0 | beir | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | 2022-10-23T06:13:36Z | 2022-06-17T12:59:43.000Z | 2022-06-17T12:59:43 | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
- tweet-retrieval
- citation-prediction-retrieval
- duplication-question-retrieval
- argument-retrieval
- news-retrieval
- biomedical-information-retrieval
- question-answering-retrieval
---
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
```python
# A minimal sketch of loading this dataset with the Hugging Face `datasets`
# library; the available configurations and splits may vary, so check the repo.
from datasets import load_dataset

dataset = load_dataset("BeIR/trec-covid-generated-queries")
```
### Supported Tasks and Leaderboards
The benchmark evaluates zero-shot retrieval models with ranking metrics such as nDCG@10.
The current best performing models can be found on the official [leaderboard](https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns).
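BEIR reports retrieval quality primarily as nDCG@10. As a reference point, here is a minimal per-query implementation of that metric — a sketch for illustration, not the official evaluation code:

```python
import math


def ndcg_at_k(ranked_doc_ids, relevance, k=10):
    """nDCG@k for a single query.

    ranked_doc_ids: doc ids as returned by the retriever, best first.
    relevance: dict mapping doc id -> graded relevance judgement (qrels row).
    """
    # Discounted cumulative gain over the top-k retrieved documents.
    dcg = sum(
        relevance.get(doc_id, 0) / math.log2(rank + 2)
        for rank, doc_id in enumerate(ranked_doc_ids[:k])
    )
    # Ideal DCG: the judged documents sorted by relevance.
    ideal = sorted(relevance.values(), reverse=True)[:k]
    idcg = sum(rel / math.log2(rank + 2) for rank, rel in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0
```

In practice the per-query scores are averaged over all queries of a dataset to obtain the reported number.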
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep the first row as a header. For example: `q1 doc1 1`
### Data Instances
A high level example of any beir dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
        "text": "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
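The `corpus`, `queries` and `qrels` files in the format described above can be read into these dictionaries with standard-library Python alone. This is a sketch of such a loader; the loaders shipped with the BEIR toolkit should be preferred in practice:

```python
import csv
import json


def load_corpus(path):
    # Each line of corpus.jsonl is a JSON object with _id, title and text.
    corpus = {}
    with open(path) as f:
        for line in f:
            doc = json.loads(line)
            corpus[doc["_id"]] = {"title": doc.get("title", ""), "text": doc["text"]}
    return corpus


def load_queries(path):
    # Each line of queries.jsonl is a JSON object with _id and text.
    queries = {}
    with open(path) as f:
        for line in f:
            query = json.loads(line)
            queries[query["_id"]] = query["text"]
    return queries


def load_qrels(path):
    # qrels are tab-separated (query-id, corpus-id, score); first row is a header.
    qrels = {}
    with open(path) as f:
        reader = csv.reader(f, delimiter="\t")
        next(reader)  # skip header row
        for query_id, corpus_id, score in reader:
            qrels.setdefault(query_id, {})[corpus_id] = int(score)
    return qrels
```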
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query document relevance judgements, made up of:
- `_id`: a `string` feature representing the query id
- `_id`: a `string` feature, denoting the document id.
- `score`: a `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. | [
-0.5227212905883789,
-0.5249219536781311,
0.14435674250125885,
0.04820423573255539,
0.055916160345077515,
0.0011022627586498857,
-0.1081070527434349,
-0.24874727427959442,
0.28598034381866455,
0.07840226590633392,
-0.45233607292175293,
-0.7186435461044312,
-0.347678542137146,
0.20300328731... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pinecone/image-set | pinecone | 2022-07-07T15:33:29Z | 66 | 1 | null | [
"license:cc-by-4.0",
"region:us"
] | 2022-07-07T15:33:29Z | 2022-07-06T17:02:00.000Z | 2022-07-06T17:02:00 | ---
license: cc-by-4.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mitclinicalml/clinical-ie | mitclinicalml | 2022-12-01T16:34:20Z | 66 | 22 | null | [
"arxiv:2205.12689",
"arxiv:2010.02010",
"arxiv:1806.04185",
"region:us"
] | 2022-12-01T16:34:20Z | 2022-10-21T23:00:31.000Z | 2022-10-21T23:00:31 | ---
{}
---
Below, we provide access to the datasets used in and created for the EMNLP 2022 paper [Large Language Models are Few-Shot Clinical Information Extractors](https://arxiv.org/abs/2205.12689).
# Task #1: Clinical Sense Disambiguation
For Task #1, we use the original annotations from the [Clinical Acronym Sense Inventory (CASI) dataset](https://conservancy.umn.edu/handle/11299/137703), described in [their paper](https://academic.oup.com/jamia/article/21/2/299/723657).
As is common, due to noisiness in the label set, we do not evaluate on the entire dataset, but only on a cleaner subset. For consistency, we use the subset defined by the filtering used in ["Zero-Shot Clinical Acronym Expansion
via Latent Meaning Cells"](https://arxiv.org/pdf/2010.02010.pdf). This results in a subset of 18,164 examples and 41 acronyms for evaluation.
We additionally use the MIMIC Reverse Substitution dataset, as created in that same paper, with further instructions available in [their repository](https://github.com/griff4692/LMC).
# Task #2: Biomedical Evidence Extraction
For Task #2, we use the out-of-the-box high-level labels from the [PICO dataset](https://arxiv.org/abs/1806.04185) available publicly in the repository [here](https://github.com/bepnye/EBM-NLP).
# Task #3: Coreference Resolution
For Task #3, we annotated 105 snippets from the [CASI dataset](https://conservancy.umn.edu/handle/11299/137703), 5 for development and 100 for test. Each example is labeled with a singular pronoun and that pronoun's corresponding noun phrase antecedent (or antecedents).
The antecedent was annotated as the entire noun phrase (barring any dependent clauses); in cases where multiple equally valid antecedents were available, all were labeled (empirically, up to 2).
For the purposes of evaluation, we chose the antecedent with the highest overlap to each model’s output.
To ensure nontrivial examples, the annotators excluded all examples of personal pronouns (e.g. “he”, “she”) if another person (and possible antecedent) had not yet been mentioned in the snippet.
Examples were skipped in annotation if the pronoun did not have an antecedent within the provided text snippet.
# Task #4: Medication Status Extraction
For Task #4, we annotated 105 snippets from the [CASI dataset](https://conservancy.umn.edu/handle/11299/137703), 5 for development and 100 for test. We wanted to create a dataset of challenging examples containing a changeover in treatment. From a sample, only ∼5% of CASI snippets contained such examples. To increase the density of these examples and speed up annotation, clinical notes were filtered with the following search terms: discont, adverse, side effect, switch, and dosage, leading to 1445 snippets. We excluded snippets that were purely medication lists, requiring at least some narrative part to be present.
For each example, the annotators first extracted all medications. Guidelines excluded medication categories (e.g. “ACE-inhibitor”) if they referred to more specific drug names mentioned elsewhere (even if partially cut off in the snippet). For instance, only the antibiotic Levaquin was labeled in: “It is
probably reasonable to treat with antibiotics [...]. I would agree with Levaquin alone [...]”. Guidelines also excluded electrolytes and intravenous fluids as well as route and dosage information. In a second step, medications were assigned to one of three categories: active, discontinued, and neither.
Discontinued medications also include medications that are temporarily on hold. The category neither was assigned to all remaining medications (e.g. allergies, potential medications).
The medication lists for each example were serialized as JSON.
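To illustrate the serialization, the sketch below builds and round-trips one such record. The key names are hypothetical, chosen to mirror the three status categories described above; the exact key names in the released files may differ:

```python
import json

# Hypothetical record for one snippet; the keys mirror the three status
# categories (active / discontinued / neither) but are an assumption,
# not taken from the released files.
record = {
    "active": ["Levaquin"],
    "discontinued": ["lisinopril"],
    "neither": ["penicillin"],  # e.g. only mentioned as an allergy
}

serialized = json.dumps(record)
restored = json.loads(serialized)
```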
# Task #5: Medication Attribute Extraction
For Task #5, we again annotated 105 snippets from the [CASI dataset](https://conservancy.umn.edu/handle/11299/137703), 5 for development and 100 for test.
Annotation guidelines were adapted from the 2009 i2b2 medication extraction challenge (Uzuner et al., 2010) with slight modifications.
We allowed medication attributes to have multiple spans and grouped together different mentions of the same drug (e.g. “Tylenol” and “Tylenol PM”) for the purpose of relation extraction.
The annotation list for each example was serialized as JSON.
# Citations
When using our annotations for tasks #3-5, please cite our paper, as well as the papers from which the underlying text originated.
```
@inproceedings{agrawal2022large,
title={Large Language Models are Few-Shot Clinical Information Extractors},
author={Monica Agrawal and Stefan Hegselmann and Hunter Lang and Yoon Kim and David Sontag},
booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing},
year={2022},
url_Paper = {https://arxiv.org/pdf/2205.12689.pdf}
}
```
```
@article{moon2014sense,
title={A sense inventory for clinical abbreviations and acronyms created using clinical notes and medical dictionary resources},
author={Moon, Sungrim and Pakhomov, Serguei and Liu, Nathan and Ryan, James O and Melton, Genevieve B},
journal={Journal of the American Medical Informatics Association},
volume={21},
number={2},
pages={299--307},
year={2014},
publisher={BMJ Publishing Group BMA House, Tavistock Square, London, WC1H 9JR}
}
```
# Licensing
The annotations added by our team fall under the MIT license, but the CASI dataset itself is subject to its own licensing.
| [
-0.10263713449239731,
-0.5600908994674683,
0.718034029006958,
0.024561500176787376,
-0.17171062529087067,
-0.4190724492073059,
-0.14588870108127594,
-0.604475200176239,
0.38846245408058167,
0.581152081489563,
-0.37983036041259766,
-0.6627135872840881,
-0.8347877264022827,
0.447414577007293... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/bionlp_st_2011_epi | bigbio | 2022-12-22T15:43:49Z | 66 | 1 | null | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-12-22T15:43:49Z | 2022-11-13T22:06:49.000Z | 2022-11-13T22:06:49 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: GENIA_PROJECT_LICENSE
pretty_name: BioNLP 2011 EPI
homepage: https://github.com/openbiocorpora/bionlp-st-2011-epi
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- EVENT_EXTRACTION
- NAMED_ENTITY_RECOGNITION
- COREFERENCE_RESOLUTION
---
# Dataset Card for BioNLP 2011 EPI
## Dataset Description
- **Homepage:** https://github.com/openbiocorpora/bionlp-st-2011-epi
- **Pubmed:** True
- **Public:** True
- **Tasks:** EE,NER,COREF
The dataset of the Epigenetics and Post-translational Modifications (EPI) task
of BioNLP Shared Task 2011.
## Citation Information
```
@inproceedings{ohta-etal-2011-overview,
title = "Overview of the Epigenetics and Post-translational
Modifications ({EPI}) task of {B}io{NLP} Shared Task 2011",
author = "Ohta, Tomoko and
Pyysalo, Sampo and
Tsujii, Jun{'}ichi",
booktitle = "Proceedings of {B}io{NLP} Shared Task 2011 Workshop",
month = jun,
year = "2011",
address = "Portland, Oregon, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W11-1803",
pages = "16--25",
}
```
| [
-0.2573101818561554,
-0.2431918978691101,
0.24354951083660126,
0.24488142132759094,
-0.26595237851142883,
0.0433415025472641,
-0.4003506302833557,
-0.43706145882606506,
0.604652464389801,
0.275115966796875,
-0.6756767630577087,
-0.8618839383125305,
-0.48853161931037903,
0.36544540524482727... | null | null | null | null | null | null | null | null | null | null | null | null | null |