| datasetId | card |
|---|---|
HanxuHU/mmmu_es | ---
dataset_info:
- config_name: Accounting
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 1599588.0
num_examples: 30
download_size: 1535854
dataset_size: 1599588.0
- config_name: Agriculture
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 119218028.0
num_examples: 30
download_size: 119224565
dataset_size: 119218028.0
- config_name: Architecture_and_Engineering
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 722335.0
num_examples: 30
download_size: 727785
dataset_size: 722335.0
- config_name: Art
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 29934652.0
num_examples: 30
download_size: 29939932
dataset_size: 29934652.0
- config_name: Art_Theory
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 33481625.0
num_examples: 30
download_size: 29784042
dataset_size: 33481625.0
- config_name: Basic_Medical_Science
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 4126203.0
num_examples: 30
download_size: 4131787
dataset_size: 4126203.0
- config_name: Biology
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 8492641.0
num_examples: 30
download_size: 8496404
dataset_size: 8492641.0
- config_name: Chemistry
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 1519095.0
num_examples: 30
download_size: 1523864
dataset_size: 1519095.0
- config_name: Clinical_Medicine
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 10883201.0
num_examples: 30
download_size: 10887149
dataset_size: 10883201.0
- config_name: Computer_Science
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 2072827.0
num_examples: 30
download_size: 2078528
dataset_size: 2072827.0
- config_name: Design
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 17923311.0
num_examples: 30
download_size: 16227766
dataset_size: 17923311.0
- config_name: Diagnostics_and_Laboratory_Medicine
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 37106511.0
num_examples: 30
download_size: 37089786
dataset_size: 37106511.0
- config_name: Economics
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 1487887.0
num_examples: 30
download_size: 1424495
dataset_size: 1487887.0
- config_name: Electronics
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 641792.0
num_examples: 30
download_size: 644598
dataset_size: 641792.0
- config_name: Energy_and_Power
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 1643060.0
num_examples: 30
download_size: 1647642
dataset_size: 1643060.0
- config_name: Finance
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 1072376.0
num_examples: 30
download_size: 1003864
dataset_size: 1072376.0
- config_name: Geography
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 6672021.0
num_examples: 30
download_size: 6677660
dataset_size: 6672021.0
- config_name: History
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 8820363.0
num_examples: 30
download_size: 8430707
dataset_size: 8820363.0
- config_name: Literature
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 14241494.0
num_examples: 30
download_size: 14247070
dataset_size: 14241494.0
- config_name: Manage
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 3279848.0
num_examples: 30
download_size: 3141143
dataset_size: 3279848.0
- config_name: Marketing
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 1474022.0
num_examples: 30
download_size: 1361411
dataset_size: 1474022.0
- config_name: Materials
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 2305936.0
num_examples: 30
download_size: 2310698
dataset_size: 2305936.0
- config_name: Math
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 1444976.0
num_examples: 30
download_size: 1448827
dataset_size: 1444976.0
- config_name: Mechanical_Engineering
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 875945.0
num_examples: 30
download_size: 876955
dataset_size: 875945.0
- config_name: Music
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 9359583.0
num_examples: 30
download_size: 9363806
dataset_size: 9359583.0
- config_name: Pharmacy
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 1656959.0
num_examples: 30
download_size: 1551499
dataset_size: 1656959.0
- config_name: Physics
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 1115000.0
num_examples: 30
download_size: 1118014
dataset_size: 1115000.0
- config_name: Psychology
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 4407676.0
num_examples: 30
download_size: 4313042
dataset_size: 4407676.0
- config_name: Public_Health
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 1510283.0
num_examples: 30
download_size: 1509775
dataset_size: 1510283.0
- config_name: Sociology
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: validation
num_bytes: 18455570.0
num_examples: 30
download_size: 18459889
dataset_size: 18455570.0
configs:
- config_name: Accounting
data_files:
- split: validation
path: Accounting/validation-*
- config_name: Agriculture
data_files:
- split: validation
path: Agriculture/validation-*
- config_name: Architecture_and_Engineering
data_files:
- split: validation
path: Architecture_and_Engineering/validation-*
- config_name: Art
data_files:
- split: validation
path: Art/validation-*
- config_name: Art_Theory
data_files:
- split: validation
path: Art_Theory/validation-*
- config_name: Basic_Medical_Science
data_files:
- split: validation
path: Basic_Medical_Science/validation-*
- config_name: Biology
data_files:
- split: validation
path: Biology/validation-*
- config_name: Chemistry
data_files:
- split: validation
path: Chemistry/validation-*
- config_name: Clinical_Medicine
data_files:
- split: validation
path: Clinical_Medicine/validation-*
- config_name: Computer_Science
data_files:
- split: validation
path: Computer_Science/validation-*
- config_name: Design
data_files:
- split: validation
path: Design/validation-*
- config_name: Diagnostics_and_Laboratory_Medicine
data_files:
- split: validation
path: Diagnostics_and_Laboratory_Medicine/validation-*
- config_name: Economics
data_files:
- split: validation
path: Economics/validation-*
- config_name: Electronics
data_files:
- split: validation
path: Electronics/validation-*
- config_name: Energy_and_Power
data_files:
- split: validation
path: Energy_and_Power/validation-*
- config_name: Finance
data_files:
- split: validation
path: Finance/validation-*
- config_name: Geography
data_files:
- split: validation
path: Geography/validation-*
- config_name: History
data_files:
- split: validation
path: History/validation-*
- config_name: Literature
data_files:
- split: validation
path: Literature/validation-*
- config_name: Manage
data_files:
- split: validation
path: Manage/validation-*
- config_name: Marketing
data_files:
- split: validation
path: Marketing/validation-*
- config_name: Materials
data_files:
- split: validation
path: Materials/validation-*
- config_name: Math
data_files:
- split: validation
path: Math/validation-*
- config_name: Mechanical_Engineering
data_files:
- split: validation
path: Mechanical_Engineering/validation-*
- config_name: Music
data_files:
- split: validation
path: Music/validation-*
- config_name: Pharmacy
data_files:
- split: validation
path: Pharmacy/validation-*
- config_name: Physics
data_files:
- split: validation
path: Physics/validation-*
- config_name: Psychology
data_files:
- split: validation
path: Psychology/validation-*
- config_name: Public_Health
data_files:
- split: validation
path: Public_Health/validation-*
- config_name: Sociology
data_files:
- split: validation
path: Sociology/validation-*
---
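The card above is metadata-only. Each subject config has a single 30-example `validation` split, and the `options` feature is typed as a plain string. If, as in MMMU-style datasets, that string holds a Python-style list literal, a small helper can recover the answer choices (a hypothetical sketch; the exact string encoding is not stated in the card):

```python
import ast

def parse_options(options_str):
    """Parse an options string like "['A', 'B', 'C']" into a list.

    Falls back to a single-element list when the string is not a
    well-formed Python list or tuple literal.
    """
    try:
        parsed = ast.literal_eval(options_str)
    except (ValueError, SyntaxError):
        return [options_str]
    return list(parsed) if isinstance(parsed, (list, tuple)) else [options_str]

print(parse_options("['$1,620', '$12,000', '$51,180']"))  # → ['$1,620', '$12,000', '$51,180']
```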
|
atmallen/quirky_bookrating_alice_hard | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: alice_label
dtype: bool
- name: bob_label
dtype: bool
- name: difficulty
dtype: float64
- name: statement
dtype: string
- name: choices
sequence: string
- name: character
dtype: string
- name: label
dtype: bool
splits:
- name: train
num_bytes: 96619.75463773188
num_examples: 718
- name: validation
num_bytes: 63544.77
num_examples: 472
- name: test
num_bytes: 60446.35725
num_examples: 447
download_size: 75533
dataset_size: 220610.88188773187
---
# Dataset Card for "quirky_bookrating_alice_hard"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BSC-LT/bsc-dolly-15k-en | ---
dataset_info:
- config_name: annotated
features:
- name: id
dtype: int64
- name: category
dtype: string
- name: instruction
dtype: string
- name: response
dtype: string
- name: context
dtype: string
- name: labels
dtype: string
splits:
- name: train
num_bytes: 11901412
num_examples: 15015
download_size: 7553519
dataset_size: 11901412
- config_name: filtered
features:
- name: id
dtype: int64
- name: category
dtype: string
- name: instruction
dtype: string
- name: response
dtype: string
- name: context
dtype: float64
- name: labels
dtype: float64
splits:
- name: train
num_bytes: 4398990
num_examples: 10157
download_size: 2749289
dataset_size: 4398990
configs:
- config_name: annotated
data_files:
- split: train
path: annotated/train-*
- config_name: filtered
data_files:
- split: train
path: filtered/train-*
---
## BSC Dolly 15k EN
A reviewed version of the [Argilla Dolly v2 English version](https://huggingface.co/datasets/argilla/databricks-dolly-15k-curated-multilingual), originally created by [Databricks](https://huggingface.co/datasets/databricks/databricks-dolly-15k).
We provide two subsets: "annotated", where some instances were labelled with potential problems; and "filtered", which only contains the instances without the issues that we observed.
## Annotation process
While analysing the Argilla Dolly v2 English version, we observed the following:
1. Task classification:
- Three categories come with context: 'Closed QA', 'Information Extraction' and 'Summarization'; the rest have none.
- Context is not always necessary, and some instructions already contain their own context.
- Incorrect categories (the intention does not always correspond to the category).
2. Confusion between "Summarization" and "Open Generative QA" / "Information Extraction" tasks:
- Tasks categorized as "Summarization" in some cases have the intent of "Open Generative QA" / "Information Extraction", and because they depend on the context, the answers are longer.
3. To note:
- 15,014 examples, about half of them "QA"-type in various formats.
- 70% have no context; when present, the context comes from the opening section of a Wikipedia article.
- Many answers are also from Wikipedia.
- Possible improvements in cleaning up text extracted from Wikipedia and handling acronyms.
4. Errors in the dataset:
- Some summaries are longer than the original text.
- Some contexts in "Information Extraction" do not contain the exact information to answer the question asked.
- There are many repeated questions that are kept because the answer is different in each case.
From the previous observations, we performed the following processing:
- Processed the "context" column to:
- Remove spellings, citations, and unit conversions given inside (parentheses) and [brackets].
- Remove source webpage links.
- Removed:
- Summarization instances where the intent is clear and the response is longer than the context (63)
- Instances where the information is not explicitly mentioned in the context (3)
- Instances with webpage links in the response or instruction (29)
- Exact (instruction/context/response) duplicates (14)
- Instruction/context duplicates (9)
- Instances where instruction is most similar to the response (6)
- Changes:
- Some instances in Summarization / Information Extraction / Closed QA are lacking context after Argilla's curation process. These instances are moved to General QA since they no longer have context and ask about specifics (86).
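As an illustration, the parenthesis and bracket cleanup described above can be approximated with a couple of regular expressions. This is a hypothetical sketch, not the actual processing script, and it does not handle nested parentheses:

```python
import re

def clean_context(text):
    # Drop parenthesised asides (spellings, citations, unit conversions).
    text = re.sub(r"\s*\([^()]*\)", "", text)
    # Drop bracketed asides, e.g. citation markers like [1].
    text = re.sub(r"\s*\[[^\[\]]*\]", "", text)
    # Collapse any leftover runs of whitespace.
    return re.sub(r"\s{2,}", " ", text).strip()

print(clean_context("Paris (French: [paʁi]) is the capital [1] of France."))
# → Paris is the capital of France.
```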
|
matlok/python-text-copilot-training-instruct-ai-research-2024-02-03 | ---
license:
- other
pretty_name: >-
2024-02-03 - python copilot instructions on how to code using alpaca and yaml
dataset_info:
- config_name: andromeda
splits:
- name: train
- name: test
- config_name: swarms
splits:
- name: train
- name: test
- config_name: swarms_pytorch
splits:
- name: train
- name: test
- config_name: longnet
splits:
- name: train
- name: test
- config_name: zeta
splits:
- name: train
- name: test
configs:
- config_name: andromeda
data_files:
- split: train
path: train/train-0001-andromeda-andromeda_torch.parquet
- split: test
path: test/train-0002-andromeda-tests.parquet
- config_name: swarms
data_files:
- split: train
path: train/train-0004-swarms-swarms.parquet
- split: test
path: test/train-0005-swarms-tests.parquet
- config_name: swarms_pytorch
data_files:
- split: train
path: train/train-0006-swarms-pytorch-swarms_torch.parquet
- split: test
path: test/train-0007-swarms-pytorch-tests.parquet
- config_name: longnet
data_files:
- split: train
path: train/train-0009-longnet-long_net.parquet
- split: test
path: test/train-0010-longnet-tests.parquet
- config_name: zeta
data_files:
- split: train
path: train/train-0011-zeta-zeta.parquet
- split: test
path: test/train-0012-zeta-tests.parquet
size_categories:
- 1M<n<10M
tags:
- python-copilot
- python-coding
- python-architecture
- knowledge-graphs
- multimodal
- text-image-audio
- fine-tuning
- training
- question-answering
- image-knowledge-graph
- alpaca
- mp3
- png
- text
- instruct
- coding
- task
- prompt
- response
- yaml
# supported task_categories
# text-classification, token-classification, table-question-answering, question-answering, zero-shot-classification, translation, summarization, conversational, feature-extraction, text-generation, text2text-generation, fill-mask, sentence-similarity, text-to-speech, text-to-audio, automatic-speech-recognition, audio-to-audio, audio-classification, voice-activity-detection, depth-estimation, image-classification, object-detection, image-segmentation, text-to-image, image-to-text, image-to-image, image-to-video, unconditional-image-generation, video-classification, reinforcement-learning, robotics, tabular-classification, tabular-regression, tabular-to-text, table-to-text, multiple-choice, text-retrieval, time-series-forecasting, text-to-video, visual-question-answering, document-question-answering, zero-shot-image-classification, graph-ml, mask-generation, zero-shot-object-detection, text-to-3d, image-to-3d, other
task_categories:
- text-generation
- question-answering
# supported task_ids
# acceptability-classification, entity-linking-classification, fact-checking, intent-classification, language-identification, multi-class-classification, multi-label-classification, multi-input-text-classification, natural-language-inference, semantic-similarity-classification, sentiment-classification, topic-classification, semantic-similarity-scoring, sentiment-scoring, sentiment-analysis, hate-speech-detection, text-scoring, named-entity-recognition, part-of-speech, parsing, lemmatization, word-sense-disambiguation, coreference-resolution, extractive-qa, open-domain-qa, closed-domain-qa, news-articles-summarization, news-articles-headline-generation, dialogue-generation, dialogue-modeling, language-modeling, text-simplification, explanation-generation, abstractive-qa, open-domain-abstractive-qa, closed-domain-qa, open-book-qa, closed-book-qa, slot-filling, masked-language-modeling, keyword-spotting, speaker-identification, audio-intent-classification, audio-emotion-recognition, audio-language-identification, multi-label-image-classification, multi-class-image-classification, face-detection, vehicle-detection, instance-segmentation, semantic-segmentation, panoptic-segmentation, image-captioning, image-inpainting, image-colorization, super-resolution, grasping, task-planning, tabular-multi-class-classification, tabular-multi-label-classification, tabular-single-column-regression, rdf-to-text, multiple-choice-qa, multiple-choice-coreference-resolution, document-retrieval, utterance-retrieval, entity-linking-retrieval, fact-checking-retrieval, univariate-time-series-forecasting, multivariate-time-series-forecasting, visual-question-answering, document-question-answering
task_ids:
- parsing
---
## Python Copilot Instructions on How to Code using Alpaca and Yaml
Training and test datasets for building coding multimodal models that understand how to use the open source GitHub projects for the **Agora Open Source AI Research Lab**:
- [Agora GitHub Organization](https://github.com/Agora-X)
- [Agora Hugging Face](https://huggingface.co/AgoraX)
This dataset is the 2024-02-03 update for the matlok python copilot datasets. Please refer to the [Multimodal Python Copilot Training Overview](https://huggingface.co/datasets/matlok/multimodal-python-copilot-training-overview) for more details on how to use this dataset.
### Details
Each row contains Python code, either a class method or a global function, along with imported modules, base classes (if any), exceptions, returns, and arguments (each ordered as they appear in the code), and more.
- Rows: 1182526
- Size: 2.1 GB
- Data type: instruct
- Format: introduction to code usage as an alpaca instruction with a YAML response
- Number of python repos: 1258
### How to use the datasets
#### Load Andromeda Train/Test
```python
from datasets import load_dataset
ds = load_dataset("matlok/python-text-copilot-training-instruct-ai-research-2024-02-03", "andromeda", verification_mode="no_checks")
```
#### Load Swarms Train/Test
```python
from datasets import load_dataset
ds = load_dataset("matlok/python-text-copilot-training-instruct-ai-research-2024-02-03", "swarms", verification_mode="no_checks")
```
#### Load Swarms Pytorch Train/Test
```python
from datasets import load_dataset
ds = load_dataset("matlok/python-text-copilot-training-instruct-ai-research-2024-02-03", "swarms_pytorch", verification_mode="no_checks")
```
#### Load LongNet Train/Test
```python
from datasets import load_dataset
ds = load_dataset("matlok/python-text-copilot-training-instruct-ai-research-2024-02-03", "longnet", verification_mode="no_checks")
```
#### Load Zeta Train/Test
```python
from datasets import load_dataset
ds = load_dataset("matlok/python-text-copilot-training-instruct-ai-research-2024-02-03", "zeta", verification_mode="no_checks")
```
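The five snippets above differ only in the config name passed to `load_dataset`. As a sketch, a small helper (hypothetical, not part of the dataset) can load every config in one pass:

```python
# Repo id and config names are taken from the load examples above.
REPO = "matlok/python-text-copilot-training-instruct-ai-research-2024-02-03"
CONFIGS = ["andromeda", "swarms", "swarms_pytorch", "longnet", "zeta"]

def load_all(configs=CONFIGS):
    """Load every listed config into a dict keyed by config name."""
    # Imported lazily so that just listing the configs needs no network access.
    from datasets import load_dataset
    return {
        name: load_dataset(REPO, name, verification_mode="no_checks")
        for name in configs
    }
```

Calling `load_all()` downloads all five configs, so prefer loading a single config while iterating on your training code.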
### Schema
The alpaca instruction text with its yaml response is stored in the **desc** column:
```json
{
"active": "bool",
"args": "string",
"args_len": "float64",
"audio_file": "string",
"audio_path": "string",
"class_bases": "string",
"class_name": "string",
"code": "string",
"code_len": "float64",
"desc": "string",
"desc_docstr": "string",
"desc_docstr_len": "float64",
"desc_len": "int64",
"docstr": "string",
"docstr_len": "int64",
"file_path": "string",
"file_type": "string",
"function_names": "string",
"gen_bytes": "int64",
"gen_data_type": "string",
"gen_mode": "string",
"gen_size": "int64",
"gen_valid": "bool",
"height": "int64",
"image_file": "string",
"image_path": "string",
"method_names": "string",
"name": "string",
"num_all_bases": "int64",
"num_bases": "int64",
"num_classes": "int64",
"num_functions": "float64",
"num_imports": "int64",
"num_methods": "float64",
"prompts": "string",
"raises": "string",
"raises_len": "float64",
"recsize": "int64",
"repo": "string",
"returns": "string",
"returns_len": "float64",
"size": "int64",
"src_object": "string",
"total_objects": "int64",
"usage": "string",
"usages": "string",
"width": "int64"
}
```
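As an illustrative sketch of consuming one row matching the schema above (the sample row below is made up; real rows come from `load_dataset(...)` as shown earlier):

```python
def summarize_row(row: dict) -> dict:
    """Pull the alpaca/yaml training text and a few stats out of one row."""
    return {
        "repo": row["repo"],
        "name": row["name"],
        "training_text": row["desc"],        # alpaca instruction + yaml response
        "training_text_len": row["desc_len"],
        "has_code": bool(row.get("code")),
    }

# Hypothetical row for illustration only; field names follow the schema above.
sample_row = {
    "repo": "swarms",
    "name": "example_method",
    "desc": "### Instruction:\nHow do I call example_method?\n"
            "### Response:\nusage: ...",
    "desc_len": 72,
    "code": "def example_method():\n    pass",
}

summary = summarize_row(sample_row)
```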
open-llm-leaderboard/details_aisquared__dlite-v2-124m
---
pretty_name: Evaluation run of aisquared/dlite-v2-124m
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [aisquared/dlite-v2-124m](https://huggingface.co/aisquared/dlite-v2-124m) on the\
\ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_aisquared__dlite-v2-124m\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-27T09:27:20.533537](https://huggingface.co/datasets/open-llm-leaderboard/details_aisquared__dlite-v2-124m/blob/main/results_2023-10-27T09-27-20.533537.json)\
\ (note that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0050335570469798654,\n\
\ \"em_stderr\": 0.0007247385547751906,\n \"f1\": 0.05289324664429539,\n\
\ \"f1_stderr\": 0.001460860471625635,\n \"acc\": 0.2521704814522494,\n\
\ \"acc_stderr\": 0.007025978032038446\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0050335570469798654,\n \"em_stderr\": 0.0007247385547751906,\n\
\ \"f1\": 0.05289324664429539,\n \"f1_stderr\": 0.001460860471625635\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\"\
: 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5043409629044988,\n\
\ \"acc_stderr\": 0.014051956064076892\n }\n}\n```"
repo_url: https://huggingface.co/aisquared/dlite-v2-124m
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|arc:challenge|25_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_27T09_27_20.533537
path:
- '**/details_harness|drop|3_2023-10-27T09-27-20.533537.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-27T09-27-20.533537.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_27T09_27_20.533537
path:
- '**/details_harness|gsm8k|5_2023-10-27T09-27-20.533537.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-27T09-27-20.533537.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hellaswag|10_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T13:53:19.147655.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T13:53:19.147655.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T13:53:19.147655.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_27T09_27_20.533537
path:
- '**/details_harness|winogrande|5_2023-10-27T09-27-20.533537.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-27T09-27-20.533537.parquet'
- config_name: results
data_files:
- split: 2023_07_19T13_53_19.147655
path:
- results_2023-07-19T13:53:19.147655.parquet
- split: 2023_10_27T09_27_20.533537
path:
- results_2023-10-27T09-27-20.533537.parquet
- split: latest
path:
- results_2023-10-27T09-27-20.533537.parquet
---
# Dataset Card for Evaluation run of aisquared/dlite-v2-124m
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/aisquared/dlite-v2-124m
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [aisquared/dlite-v2-124m](https://huggingface.co/aisquared/dlite-v2-124m) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the runs (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_aisquared__dlite-v2-124m",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-27T09:27:20.533537](https://huggingface.co/datasets/open-llm-leaderboard/details_aisquared__dlite-v2-124m/blob/main/results_2023-10-27T09-27-20.533537.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each task in the results and in the "latest" split for each eval):
```python
{
"all": {
"em": 0.0050335570469798654,
"em_stderr": 0.0007247385547751906,
"f1": 0.05289324664429539,
"f1_stderr": 0.001460860471625635,
"acc": 0.2521704814522494,
"acc_stderr": 0.007025978032038446
},
"harness|drop|3": {
"em": 0.0050335570469798654,
"em_stderr": 0.0007247385547751906,
"f1": 0.05289324664429539,
"f1_stderr": 0.001460860471625635
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.5043409629044988,
"acc_stderr": 0.014051956064076892
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
almost/test | ---
license: afl-3.0
---
|
zolak/twitter_dataset_50_1713220928 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 167834
num_examples: 425
download_size: 91050
dataset_size: 167834
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
vic0428/imdb-card-pred-science | ---
dataset_info:
features:
- name: text
dtype: string
- name: prompt
dtype: string
- name: true_cardinality
dtype: int64
splits:
- name: train
num_bytes: 39344995.2
num_examples: 80000
- name: test
num_bytes: 9836248.8
num_examples: 20000
download_size: 8632280
dataset_size: 49181244.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "imdb-card-pred-science"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Mouli07/ROCO_Chest_Xray_v1 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 38241164.86
num_examples: 1735
download_size: 39693229
dataset_size: 38241164.86
---
# Dataset Card for "ROCO_Chest_Xray_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jiaoyang623/ddpm-butterflies-128 | ---
license: apache-2.0
---
|
AIHowto/woman_class_images_better_1832 | ---
license: creativeml-openrail-m
---
|
louisbrulenaudet/code-patrimoine | ---
license: apache-2.0
language:
- fr
multilinguality:
- monolingual
tags:
- finetuning
- legal
- french law
- droit français
- Code du patrimoine
source_datasets:
- original
pretty_name: Code du patrimoine
task_categories:
- text-generation
- table-question-answering
- summarization
- text-retrieval
- question-answering
- text-classification
size_categories:
- 1K<n<10K
---
# Code du patrimoine, non-instruct (2024-04-15)
This project focuses on fine-tuning pre-trained language models to create efficient and accurate models for legal practice.
Fine-tuning is the process of adapting a pre-trained model to perform specific tasks or cater to particular domains. It involves adjusting the model's parameters through a further round of training on task-specific or domain-specific data. While conventional fine-tuning strategies involve supervised learning with labeled data, instruction-based fine-tuning introduces a more structured and interpretable approach.
Instruction-based fine-tuning leverages the power of human-provided instructions to guide the model's behavior. These instructions can be in the form of text prompts, prompts with explicit task descriptions, or a combination of both. This approach allows for a more controlled and context-aware interaction with the LLM, making it adaptable to a multitude of specialized tasks.
Instruction-based fine-tuning significantly enhances the performance of LLMs in the following ways:
- Task-Specific Adaptation: LLMs, when fine-tuned with specific instructions, exhibit remarkable adaptability to diverse tasks. They can switch seamlessly between translation, summarization, and question-answering, guided by the provided instructions.
- Reduced Ambiguity: Traditional LLMs might generate ambiguous or contextually inappropriate responses. Instruction-based fine-tuning allows for a clearer and more context-aware generation, reducing the likelihood of nonsensical outputs.
- Efficient Knowledge Transfer: Instructions can encapsulate domain-specific knowledge, enabling LLMs to benefit from expert guidance. This knowledge transfer is particularly valuable in fields like tax practice, law, medicine, and more.
- Interpretability: Instruction-based fine-tuning also makes LLM behavior more interpretable. Since the instructions are human-readable, it becomes easier to understand and control model outputs.
- Adaptive Behavior: LLMs, post instruction-based fine-tuning, exhibit adaptive behavior that is responsive to both explicit task descriptions and implicit cues within the provided text.
## Concurrent reading of the LegalKit
To use all the legal data published on LegalKit, you can use this code snippet:
```python
# -*- coding: utf-8 -*-
import concurrent.futures
import logging
import os
import datasets
from tqdm.notebook import tqdm
def dataset_loader(
name:str,
streaming:bool=True
) -> datasets.Dataset:
"""
Helper function to load a single dataset in parallel.
Parameters
----------
name : str
Name of the dataset to be loaded.
streaming : bool, optional
Determines if datasets are streamed. Default is True.
Returns
-------
dataset : datasets.Dataset
Loaded dataset object.
Raises
------
Exception
If an error occurs during dataset loading.
"""
try:
return datasets.load_dataset(
name,
split="train",
streaming=streaming
)
except Exception as exc:
logging.error(f"Error loading dataset {name}: {exc}")
return None
def load_datasets(
req:list,
streaming:bool=True
) -> list:
"""
Downloads datasets specified in a list and creates a list of loaded datasets.
Parameters
----------
req : list
A list containing the names of datasets to be downloaded.
streaming : bool, optional
Determines if datasets are streamed. Default is True.
Returns
-------
datasets_list : list
A list containing loaded datasets as per the requested names provided in 'req'.
Raises
------
Exception
If an error occurs during dataset loading or processing.
Examples
--------
>>> datasets = load_datasets(["dataset1", "dataset2"], streaming=False)
"""
datasets_list = []
with concurrent.futures.ThreadPoolExecutor() as executor:
future_to_dataset = {executor.submit(dataset_loader, name): name for name in req}
for future in tqdm(concurrent.futures.as_completed(future_to_dataset), total=len(req)):
name = future_to_dataset[future]
try:
dataset = future.result()
if dataset:
datasets_list.append(dataset)
except Exception as exc:
logging.error(f"Error processing dataset {name}: {exc}")
return datasets_list
req = [
"louisbrulenaudet/code-artisanat",
"louisbrulenaudet/code-action-sociale-familles",
# ...
]
datasets_list = load_datasets(
req=req,
streaming=True
)
dataset = datasets.concatenate_datasets(
datasets_list
)
```
## Dataset generation
This JSON file is a list of dictionaries; each dictionary contains the following fields:
- `instruction`: `string`, presenting the instruction linked to the element.
- `input`: `string`, signifying the input details for the element.
- `output`: `string`, indicating the output information for the element.
- `start`: `string`, the date of entry into force of the article.
- `expiration`: `string`, the date of expiration of the article.
- `num`: `string`, the id of the article.
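As a hypothetical illustration of one record (only the field names come from the list above; the values here are invented placeholders):

```python
# A sketch of a single record from the dataset.
# Field names follow the card above; the values are invented for illustration.
record = {
    "instruction": "Compose l'intégralité de l'article sous forme écrite.",
    "input": "Code du patrimoine, art. L1",
    "output": "Le patrimoine s'entend, au sens du présent code, ...",
    "start": "2004-02-24",
    "expiration": "2999-01-01",
    "num": "L1",
}

# Every record exposes exactly these six string-valued fields.
expected_fields = {"instruction", "input", "output", "start", "expiration", "num"}
assert set(record) == expected_fields
assert all(isinstance(value, str) for value in record.values())
```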
We used the following list of instructions for generating the dataset:
```python
instructions = [
"Compose l'intégralité de l'article sous forme écrite.",
"Écris la totalité du contenu de l'article.",
"Formule la totalité du texte présent dans l'article.",
"Produis l'intégralité de l'article en écriture.",
"Développe l'article dans son ensemble par écrit.",
"Génère l'ensemble du texte contenu dans l'article.",
"Formule le contenu intégral de l'article en entier.",
"Rédige la totalité du texte de l'article en entier.",
"Compose l'intégralité du contenu textuel de l'article.",
"Rédige l'ensemble du texte qui constitue l'article.",
"Formule l'article entier dans son contenu écrit.",
"Composez l'intégralité de l'article sous forme écrite.",
"Écrivez la totalité du contenu de l'article.",
"Formulez la totalité du texte présent dans l'article.",
"Développez l'article dans son ensemble par écrit.",
"Générez l'ensemble du texte contenu dans l'article.",
"Formulez le contenu intégral de l'article en entier.",
"Rédigez la totalité du texte de l'article en entier.",
"Composez l'intégralité du contenu textuel de l'article.",
"Écrivez l'article dans son intégralité en termes de texte.",
"Rédigez l'ensemble du texte qui constitue l'article.",
"Formulez l'article entier dans son contenu écrit.",
"Composer l'intégralité de l'article sous forme écrite.",
"Écrire la totalité du contenu de l'article.",
"Formuler la totalité du texte présent dans l'article.",
"Produire l'intégralité de l'article en écriture.",
"Développer l'article dans son ensemble par écrit.",
"Générer l'ensemble du texte contenu dans l'article.",
"Formuler le contenu intégral de l'article en entier.",
"Rédiger la totalité du texte de l'article en entier.",
"Composer l'intégralité du contenu textuel de l'article.",
"Rédiger l'ensemble du texte qui constitue l'article.",
"Formuler l'article entier dans son contenu écrit.",
"Quelles sont les dispositions de l'article ?",
"Quelles dispositions sont incluses dans l'article ?",
"Quelles sont les dispositions énoncées dans l'article ?",
"Quel est le texte intégral de l'article ?",
"Quelle est la lettre de l'article ?"
]
```
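A minimal sketch of how an instruction could be paired with an article to form one record (the `build_record` helper and the article data are assumptions for illustration; only the sampling from the instruction list above reflects the card):

```python
import random

# Abbreviated list; the full set of instructions appears above.
instructions = [
    "Compose l'intégralité de l'article sous forme écrite.",
    "Écris la totalité du contenu de l'article.",
    "Quel est le texte intégral de l'article ?",
]

def build_record(article: dict) -> dict:
    """Pair one article with a randomly drawn instruction (hypothetical helper)."""
    return {
        "instruction": random.choice(instructions),
        "input": article["num"],       # assumption: the article id as input
        "output": article["text"],     # assumption: the article body as output
    }

record = build_record({"num": "L1", "text": "Le patrimoine s'entend, ..."})
assert record["instruction"] in instructions
assert record["input"] == "L1"
```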
## Feedback
If you have any feedback, please reach out at [louisbrulenaudet@icloud.com](mailto:louisbrulenaudet@icloud.com). |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/af66c1c4 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 186
num_examples: 10
download_size: 1330
dataset_size: 186
---
# Dataset Card for "af66c1c4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
anan-2024/twitter_dataset_1713211678 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 124224
num_examples: 334
download_size: 68932
dataset_size: 124224
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kpriyanshu256/MultiTabQA-multitable_pretraining-Salesforce-codet5-base_train-html-131000 | ---
dataset_info:
features:
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: labels
sequence:
sequence: int64
splits:
- name: train
num_bytes: 13336000
num_examples: 1000
download_size: 653897
dataset_size: 13336000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
davanstrien/leicester_loaded_annotations_binary | ---
dataset_info:
features:
- name: image
dtype: image
- name: labels
dtype:
class_label:
names:
'0': other
'1': county_trades
splits:
- name: train
num_bytes: 1090143420.0
num_examples: 525
download_size: 0
dataset_size: 1090143420.0
---
# Dataset Card for "leicester_loaded_annotations_binary"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
shuyuej/gsm8k_testing_chatgpt_generated | ---
license: apache-2.0
---
# Dataset Construction
The `paraphrased questions` are generated by [gpt-3.5-turbo](https://openai.com/blog/gpt-3-5-turbo-fine-tuning-and-api-updates).
## Prompt Template
```
You are an AI assistant to help me rephrase questions.
Follow the given examples.
Question: Angelo and Melanie want to plan how many hours over the next week they should study together for their test next week. They have 2 chapters of their textbook to study and 4 worksheets to memorize. They figure out that they should dedicate 3 hours to each chapter of their textbook and 1.5 hours for each worksheet. If they plan to study no more than 4 hours each day, how many days should they plan to study total over the next week if they take a 10-minute break every hour, include 3 10-minute snack breaks each day, and 30 minutes for lunch each day?
Rephrase the above question: Angelo and Melanie need to study 2 chapters in their textbook and 4 worksheets for their upcoming test. They have planned to dedicate 3 hours for each chapter and 1.5 hours for each worksheet. They can study for a maximum of 4 hours each day, taking into account 10-minute breaks every hour, 3 10-minute snack breaks per day, and 30 minutes for lunch. How many days do they need to study in total over the next week to complete their study plan?
Question: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?
Rephrase the above question: If Leah had 32 chocolates and her sister had 42, and they both consumed 35 chocolates, what is the total number of chocolates that they have left?
Question: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?
Rephrase the above question: What is the amount of money that Olivia has left after purchasing five bagels for $3 each, if she initially had $23?
Question: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room?
Rephrase the above question: If there were initially nine computers in the server room and five more computers were added each day from Monday to Thursday, what is the current total number of computers in the server room?
Question: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?
Rephrase the above question: After losing 23 golf balls on Tuesday and an additional 2 on Wednesday, how many golf balls does Michael have left if he initially had 58 golf balls?
Question: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?
Rephrase the above question: If Jason initially had 20 lollipops and now has 12 after giving some to Denny, how many lollipops did he give to Denny?
Question: Sam bought a dozen boxes, each with 30 highlighter pens inside, for $10 each box. He rearranged five of these boxes into packages of six highlighters each and sold them for $3 per package. He sold the rest of the highlighters separately at the rate of three pens for $2. How much profit did he make in total, in dollars?
Rephrase the above question: Sam purchased 12 boxes, each containing 30 highlighter pens, at $10 per box. He repackaged five of these boxes into sets of six highlighters and sold them for $3 per set. He sold the remaining highlighters individually at a rate of three pens for $2. What is the total profit he made in dollars?
Question: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?
Rephrase the above question: If there were initially 15 trees in the grove and the grove workers are planning to plant more trees today, resulting in a total of 21 trees, how many trees did the workers plant today?
Question: {e['question']}\nRephrase the above question:
```
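A minimal sketch of how the template above could be filled for each test question `e` (the `FEW_SHOT_PREFIX` variable and `build_prompt` helper are assumptions; the final line mirrors the `{e['question']}` slot in the template):

```python
# Abbreviated few-shot prefix; the full eight question/rephrase pairs appear
# in the template above.
FEW_SHOT_PREFIX = (
    "You are an AI assistant to help me rephrase questions.\n"
    "Follow the given examples.\n"
)

def build_prompt(e: dict) -> str:
    """Append the target question to the few-shot prefix, as in the template."""
    return f"{FEW_SHOT_PREFIX}Question: {e['question']}\nRephrase the above question:"

prompt = build_prompt(
    {"question": "Olivia has $23. She bought five bagels for $3 each. "
                 "How much money does she have left?"}
)
assert prompt.endswith("Rephrase the above question:")
assert "Olivia has $23." in prompt
```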
# Dataset Usage
```python
from datasets import load_dataset
# Load dataset
dataset = load_dataset("shuyuej/gsm8k_testing_chatgpt_generated")
dataset = dataset["test"]
print(dataset)
```
# Citation
If you find our toolkit useful, please consider citing our repo and toolkit in your publications. We provide a BibTeX entry below.
```bibtex
@misc{JiaPromptCraft23,
author = {Jia, Shuyue},
title = {{PromptCraft}: A Prompt Perturbation Toolkit},
year = {2023},
publisher = {GitHub},
journal = {GitHub Repository},
howpublished = {\url{https://github.com/SuperBruceJia/promptcraft}},
}
@misc{JiaAwesomeLLM23,
author = {Jia, Shuyue},
title = {Awesome {LLM} Self-Consistency},
year = {2023},
publisher = {GitHub},
journal = {GitHub Repository},
howpublished = {\url{https://github.com/SuperBruceJia/Awesome-LLM-Self-Consistency}},
}
@misc{JiaAwesomeSTS23,
author = {Jia, Shuyue},
title = {Awesome Semantic Textual Similarity},
year = {2023},
publisher = {GitHub},
journal = {GitHub Repository},
howpublished = {\url{https://github.com/SuperBruceJia/Awesome-Semantic-Textual-Similarity}},
}
``` |
open-llm-leaderboard/details_BAAI__Aquila2-34B | ---
pretty_name: Evaluation run of BAAI/Aquila2-34B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [BAAI/Aquila2-34B](https://huggingface.co/BAAI/Aquila2-34B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_BAAI__Aquila2-34B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-15T18:37:14.451844](https://huggingface.co/datasets/open-llm-leaderboard/details_BAAI__Aquila2-34B/blob/main/results_2024-01-15T18-37-14.451844.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.7421090841218929,\n\
\ \"acc_stderr\": 0.028617632191882958,\n \"acc_norm\": 0.7572926151712773,\n\
\ \"acc_norm_stderr\": 0.02933086673337662,\n \"mc1\": 0.2802937576499388,\n\
\ \"mc1_stderr\": 0.015723139524608753,\n \"mc2\": 0.40853761852658155,\n\
\ \"mc2_stderr\": 0.014823421659209666\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5273037542662116,\n \"acc_stderr\": 0.014589589101985994,\n\
\ \"acc_norm\": 0.5247440273037542,\n \"acc_norm_stderr\": 0.01459348769493774\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.643397729535949,\n\
\ \"acc_stderr\": 0.004780169873332854,\n \"acc_norm\": 0.8189603664608643,\n\
\ \"acc_norm_stderr\": 0.0038426408003615128\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.47,\n \"acc_stderr\": 0.05016135580465919,\n \
\ \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.05016135580465919\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.725925925925926,\n\
\ \"acc_stderr\": 0.03853254836552003,\n \"acc_norm\": 0.725925925925926,\n\
\ \"acc_norm_stderr\": 0.03853254836552003\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7828947368421053,\n \"acc_stderr\": 0.033550453048829226,\n\
\ \"acc_norm\": 0.7828947368421053,\n \"acc_norm_stderr\": 0.033550453048829226\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.81,\n\
\ \"acc_stderr\": 0.039427724440366234,\n \"acc_norm\": 0.81,\n \
\ \"acc_norm_stderr\": 0.039427724440366234\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.8,\n \"acc_stderr\": 0.024618298195866518,\n \
\ \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.024618298195866518\n \
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8402777777777778,\n\
\ \"acc_stderr\": 0.030635578972093288,\n \"acc_norm\": 0.8402777777777778,\n\
\ \"acc_norm_stderr\": 0.030635578972093288\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.6,\n \"acc_stderr\": 0.049236596391733084,\n \
\ \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.049236596391733084\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n\
\ \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.56,\n \"acc_stderr\": 0.049888765156985884,\n \
\ \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.049888765156985884\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.7687861271676301,\n\
\ \"acc_stderr\": 0.03214737302029469,\n \"acc_norm\": 0.7687861271676301,\n\
\ \"acc_norm_stderr\": 0.03214737302029469\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.6176470588235294,\n \"acc_stderr\": 0.04835503696107223,\n\
\ \"acc_norm\": 0.6176470588235294,\n \"acc_norm_stderr\": 0.04835503696107223\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.88,\n \"acc_stderr\": 0.032659863237109066,\n \"acc_norm\": 0.88,\n\
\ \"acc_norm_stderr\": 0.032659863237109066\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.6893617021276596,\n \"acc_stderr\": 0.03025123757921317,\n\
\ \"acc_norm\": 0.6893617021276596,\n \"acc_norm_stderr\": 0.03025123757921317\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5614035087719298,\n\
\ \"acc_stderr\": 0.04668000738510455,\n \"acc_norm\": 0.5614035087719298,\n\
\ \"acc_norm_stderr\": 0.04668000738510455\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.6896551724137931,\n \"acc_stderr\": 0.03855289616378948,\n\
\ \"acc_norm\": 0.6896551724137931,\n \"acc_norm_stderr\": 0.03855289616378948\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.5740740740740741,\n \"acc_stderr\": 0.02546714904546955,\n \"\
acc_norm\": 0.5740740740740741,\n \"acc_norm_stderr\": 0.02546714904546955\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5079365079365079,\n\
\ \"acc_stderr\": 0.044715725362943486,\n \"acc_norm\": 0.5079365079365079,\n\
\ \"acc_norm_stderr\": 0.044715725362943486\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n\
\ \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8258064516129032,\n\
\ \"acc_stderr\": 0.02157624818451457,\n \"acc_norm\": 0.8258064516129032,\n\
\ \"acc_norm_stderr\": 0.02157624818451457\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.6600985221674877,\n \"acc_stderr\": 0.033327690684107895,\n\
\ \"acc_norm\": 0.6600985221674877,\n \"acc_norm_stderr\": 0.033327690684107895\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.82,\n \"acc_stderr\": 0.03861229196653694,\n \"acc_norm\"\
: 0.82,\n \"acc_norm_stderr\": 0.03861229196653694\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.8424242424242424,\n \"acc_stderr\": 0.028450388805284332,\n\
\ \"acc_norm\": 0.8424242424242424,\n \"acc_norm_stderr\": 0.028450388805284332\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.898989898989899,\n \"acc_stderr\": 0.021469735576055353,\n \"\
acc_norm\": 0.898989898989899,\n \"acc_norm_stderr\": 0.021469735576055353\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8911917098445595,\n \"acc_stderr\": 0.022473253332768738,\n\
\ \"acc_norm\": 0.8911917098445595,\n \"acc_norm_stderr\": 0.022473253332768738\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.7846153846153846,\n \"acc_stderr\": 0.020843034557462878,\n\
\ \"acc_norm\": 0.7846153846153846,\n \"acc_norm_stderr\": 0.020843034557462878\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.5333333333333333,\n \"acc_stderr\": 0.030417716961717474,\n \
\ \"acc_norm\": 0.5333333333333333,\n \"acc_norm_stderr\": 0.030417716961717474\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.7689075630252101,\n \"acc_stderr\": 0.027381406927868886,\n\
\ \"acc_norm\": 0.7689075630252101,\n \"acc_norm_stderr\": 0.027381406927868886\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.5695364238410596,\n \"acc_stderr\": 0.04042809961395634,\n \"\
acc_norm\": 0.5695364238410596,\n \"acc_norm_stderr\": 0.04042809961395634\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.9064220183486239,\n \"acc_stderr\": 0.012486841824601967,\n \"\
acc_norm\": 0.9064220183486239,\n \"acc_norm_stderr\": 0.012486841824601967\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.7037037037037037,\n \"acc_stderr\": 0.031141447823536023,\n \"\
acc_norm\": 0.7037037037037037,\n \"acc_norm_stderr\": 0.031141447823536023\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8823529411764706,\n \"acc_stderr\": 0.022613286601132012,\n \"\
acc_norm\": 0.8823529411764706,\n \"acc_norm_stderr\": 0.022613286601132012\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8987341772151899,\n \"acc_stderr\": 0.019637720526065508,\n \
\ \"acc_norm\": 0.8987341772151899,\n \"acc_norm_stderr\": 0.019637720526065508\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.8385650224215246,\n\
\ \"acc_stderr\": 0.02469395789912846,\n \"acc_norm\": 0.8385650224215246,\n\
\ \"acc_norm_stderr\": 0.02469395789912846\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8396946564885496,\n \"acc_stderr\": 0.03217829420744631,\n\
\ \"acc_norm\": 0.8396946564885496,\n \"acc_norm_stderr\": 0.03217829420744631\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.9090909090909091,\n \"acc_stderr\": 0.026243194054073885,\n \"\
acc_norm\": 0.9090909090909091,\n \"acc_norm_stderr\": 0.026243194054073885\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8518518518518519,\n\
\ \"acc_stderr\": 0.03434300243631,\n \"acc_norm\": 0.8518518518518519,\n\
\ \"acc_norm_stderr\": 0.03434300243631\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.9079754601226994,\n \"acc_stderr\": 0.02271074471568872,\n\
\ \"acc_norm\": 0.9079754601226994,\n \"acc_norm_stderr\": 0.02271074471568872\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5178571428571429,\n\
\ \"acc_stderr\": 0.04742762361243011,\n \"acc_norm\": 0.5178571428571429,\n\
\ \"acc_norm_stderr\": 0.04742762361243011\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8640776699029126,\n \"acc_stderr\": 0.033932957297610096,\n\
\ \"acc_norm\": 0.8640776699029126,\n \"acc_norm_stderr\": 0.033932957297610096\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9316239316239316,\n\
\ \"acc_stderr\": 0.016534627684311357,\n \"acc_norm\": 0.9316239316239316,\n\
\ \"acc_norm_stderr\": 0.016534627684311357\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.76,\n \"acc_stderr\": 0.04292346959909281,\n \
\ \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.04292346959909281\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8773946360153256,\n\
\ \"acc_stderr\": 0.011728672144131565,\n \"acc_norm\": 0.8773946360153256,\n\
\ \"acc_norm_stderr\": 0.011728672144131565\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.8208092485549133,\n \"acc_stderr\": 0.020647590029679332,\n\
\ \"acc_norm\": 0.8208092485549133,\n \"acc_norm_stderr\": 0.020647590029679332\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.6547486033519553,\n\
\ \"acc_stderr\": 0.015901432608930358,\n \"acc_norm\": 0.6547486033519553,\n\
\ \"acc_norm_stderr\": 0.015901432608930358\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.803921568627451,\n \"acc_stderr\": 0.0227337894054476,\n\
\ \"acc_norm\": 0.803921568627451,\n \"acc_norm_stderr\": 0.0227337894054476\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.842443729903537,\n\
\ \"acc_stderr\": 0.020692237273583994,\n \"acc_norm\": 0.842443729903537,\n\
\ \"acc_norm_stderr\": 0.020692237273583994\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.8364197530864198,\n \"acc_stderr\": 0.020581466138257107,\n\
\ \"acc_norm\": 0.8364197530864198,\n \"acc_norm_stderr\": 0.020581466138257107\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.7092198581560284,\n \"acc_stderr\": 0.027090664368353178,\n \
\ \"acc_norm\": 0.7092198581560284,\n \"acc_norm_stderr\": 0.027090664368353178\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.7073011734028684,\n\
\ \"acc_stderr\": 0.011620949195849536,\n \"acc_norm\": 0.7073011734028684,\n\
\ \"acc_norm_stderr\": 0.011620949195849536\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.8272058823529411,\n \"acc_stderr\": 0.02296606758558178,\n\
\ \"acc_norm\": 0.8272058823529411,\n \"acc_norm_stderr\": 0.02296606758558178\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.8218954248366013,\n \"acc_stderr\": 0.015478369653108568,\n \
\ \"acc_norm\": 0.8218954248366013,\n \"acc_norm_stderr\": 0.015478369653108568\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.8090909090909091,\n\
\ \"acc_stderr\": 0.03764425585984927,\n \"acc_norm\": 0.8090909090909091,\n\
\ \"acc_norm_stderr\": 0.03764425585984927\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.8653061224489796,\n \"acc_stderr\": 0.021855658840811615,\n\
\ \"acc_norm\": 0.8653061224489796,\n \"acc_norm_stderr\": 0.021855658840811615\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.9203980099502488,\n\
\ \"acc_stderr\": 0.01913968563350382,\n \"acc_norm\": 0.9203980099502488,\n\
\ \"acc_norm_stderr\": 0.01913968563350382\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.93,\n \"acc_stderr\": 0.025643239997624294,\n \
\ \"acc_norm\": 0.93,\n \"acc_norm_stderr\": 0.025643239997624294\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.7891566265060241,\n\
\ \"acc_stderr\": 0.03175554786629919,\n \"acc_norm\": 0.7891566265060241,\n\
\ \"acc_norm_stderr\": 0.03175554786629919\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.9064327485380117,\n \"acc_stderr\": 0.02233599323116327,\n\
\ \"acc_norm\": 0.9064327485380117,\n \"acc_norm_stderr\": 0.02233599323116327\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2802937576499388,\n\
\ \"mc1_stderr\": 0.015723139524608753,\n \"mc2\": 0.40853761852658155,\n\
\ \"mc2_stderr\": 0.014823421659209666\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.755327545382794,\n \"acc_stderr\": 0.012082125654159738\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.006065200909780136,\n \
\ \"acc_stderr\": 0.00213867030146047\n }\n}\n```"
repo_url: https://huggingface.co/BAAI/Aquila2-34B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|arc:challenge|25_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|arc:challenge|25_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|gsm8k|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|gsm8k|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hellaswag|10_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hellaswag|10_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-15T18-27-33.218553.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-15T18-37-14.451844.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-15T18-37-14.451844.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- '**/details_harness|winogrande|5_2024-01-15T18-27-33.218553.parquet'
- split: 2024_01_15T18_37_14.451844
path:
- '**/details_harness|winogrande|5_2024-01-15T18-37-14.451844.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-15T18-37-14.451844.parquet'
- config_name: results
data_files:
- split: 2024_01_15T18_27_33.218553
path:
- results_2024-01-15T18-27-33.218553.parquet
- split: 2024_01_15T18_37_14.451844
path:
- results_2024-01-15T18-37-14.451844.parquet
- split: latest
path:
- results_2024-01-15T18-37-14.451844.parquet
---
# Dataset Card for Evaluation run of BAAI/Aquila2-34B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [BAAI/Aquila2-34B](https://huggingface.co/BAAI/Aquila2-34B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_BAAI__Aquila2-34B",
"harness_winogrande_5",
	split="latest")
```
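As the configuration listing above shows, each timestamped split name encodes the run's ISO timestamp with `-` and `:` replaced by `_` (e.g. `2024_01_15T18_37_14.451844`). A minimal sketch of recovering the human-readable timestamp from a split name (the helper name is hypothetical, for illustration only):

```python
# Sketch: map a timestamped split name back to its ISO timestamp.
# Split names such as "2024_01_15T18_37_14.451844" use "_" in place of
# "-" (in the date part) and ":" (in the time part) around the "T".
def split_to_timestamp(split_name: str) -> str:
    date, _, time = split_name.partition("T")
    return date.replace("_", "-") + "T" + time.replace("_", ":")

print(split_to_timestamp("2024_01_15T18_37_14.451844"))
# 2024-01-15T18:37:14.451844
```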
## Latest results
These are the [latest results from run 2024-01-15T18:37:14.451844](https://huggingface.co/datasets/open-llm-leaderboard/details_BAAI__Aquila2-34B/blob/main/results_2024-01-15T18-37-14.451844.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.7421090841218929,
"acc_stderr": 0.028617632191882958,
"acc_norm": 0.7572926151712773,
"acc_norm_stderr": 0.02933086673337662,
"mc1": 0.2802937576499388,
"mc1_stderr": 0.015723139524608753,
"mc2": 0.40853761852658155,
"mc2_stderr": 0.014823421659209666
},
"harness|arc:challenge|25": {
"acc": 0.5273037542662116,
"acc_stderr": 0.014589589101985994,
"acc_norm": 0.5247440273037542,
"acc_norm_stderr": 0.01459348769493774
},
"harness|hellaswag|10": {
"acc": 0.643397729535949,
"acc_stderr": 0.004780169873332854,
"acc_norm": 0.8189603664608643,
"acc_norm_stderr": 0.0038426408003615128
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.47,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.47,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.725925925925926,
"acc_stderr": 0.03853254836552003,
"acc_norm": 0.725925925925926,
"acc_norm_stderr": 0.03853254836552003
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7828947368421053,
"acc_stderr": 0.033550453048829226,
"acc_norm": 0.7828947368421053,
"acc_norm_stderr": 0.033550453048829226
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.81,
"acc_stderr": 0.039427724440366234,
"acc_norm": 0.81,
"acc_norm_stderr": 0.039427724440366234
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.8,
"acc_stderr": 0.024618298195866518,
"acc_norm": 0.8,
"acc_norm_stderr": 0.024618298195866518
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8402777777777778,
"acc_stderr": 0.030635578972093288,
"acc_norm": 0.8402777777777778,
"acc_norm_stderr": 0.030635578972093288
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.6,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.6,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.56,
"acc_stderr": 0.049888765156985884,
"acc_norm": 0.56,
"acc_norm_stderr": 0.049888765156985884
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.7687861271676301,
"acc_stderr": 0.03214737302029469,
"acc_norm": 0.7687861271676301,
"acc_norm_stderr": 0.03214737302029469
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.6176470588235294,
"acc_stderr": 0.04835503696107223,
"acc_norm": 0.6176470588235294,
"acc_norm_stderr": 0.04835503696107223
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.88,
"acc_stderr": 0.032659863237109066,
"acc_norm": 0.88,
"acc_norm_stderr": 0.032659863237109066
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6893617021276596,
"acc_stderr": 0.03025123757921317,
"acc_norm": 0.6893617021276596,
"acc_norm_stderr": 0.03025123757921317
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5614035087719298,
"acc_stderr": 0.04668000738510455,
"acc_norm": 0.5614035087719298,
"acc_norm_stderr": 0.04668000738510455
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6896551724137931,
"acc_stderr": 0.03855289616378948,
"acc_norm": 0.6896551724137931,
"acc_norm_stderr": 0.03855289616378948
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.5740740740740741,
"acc_stderr": 0.02546714904546955,
"acc_norm": 0.5740740740740741,
"acc_norm_stderr": 0.02546714904546955
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5079365079365079,
"acc_stderr": 0.044715725362943486,
"acc_norm": 0.5079365079365079,
"acc_norm_stderr": 0.044715725362943486
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8258064516129032,
"acc_stderr": 0.02157624818451457,
"acc_norm": 0.8258064516129032,
"acc_norm_stderr": 0.02157624818451457
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.6600985221674877,
"acc_stderr": 0.033327690684107895,
"acc_norm": 0.6600985221674877,
"acc_norm_stderr": 0.033327690684107895
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.82,
"acc_stderr": 0.03861229196653694,
"acc_norm": 0.82,
"acc_norm_stderr": 0.03861229196653694
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8424242424242424,
"acc_stderr": 0.028450388805284332,
"acc_norm": 0.8424242424242424,
"acc_norm_stderr": 0.028450388805284332
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.898989898989899,
"acc_stderr": 0.021469735576055353,
"acc_norm": 0.898989898989899,
"acc_norm_stderr": 0.021469735576055353
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8911917098445595,
"acc_stderr": 0.022473253332768738,
"acc_norm": 0.8911917098445595,
"acc_norm_stderr": 0.022473253332768738
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.7846153846153846,
"acc_stderr": 0.020843034557462878,
"acc_norm": 0.7846153846153846,
"acc_norm_stderr": 0.020843034557462878
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.5333333333333333,
"acc_stderr": 0.030417716961717474,
"acc_norm": 0.5333333333333333,
"acc_norm_stderr": 0.030417716961717474
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7689075630252101,
"acc_stderr": 0.027381406927868886,
"acc_norm": 0.7689075630252101,
"acc_norm_stderr": 0.027381406927868886
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.5695364238410596,
"acc_stderr": 0.04042809961395634,
"acc_norm": 0.5695364238410596,
"acc_norm_stderr": 0.04042809961395634
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.9064220183486239,
"acc_stderr": 0.012486841824601967,
"acc_norm": 0.9064220183486239,
"acc_norm_stderr": 0.012486841824601967
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.7037037037037037,
"acc_stderr": 0.031141447823536023,
"acc_norm": 0.7037037037037037,
"acc_norm_stderr": 0.031141447823536023
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8823529411764706,
"acc_stderr": 0.022613286601132012,
"acc_norm": 0.8823529411764706,
"acc_norm_stderr": 0.022613286601132012
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8987341772151899,
"acc_stderr": 0.019637720526065508,
"acc_norm": 0.8987341772151899,
"acc_norm_stderr": 0.019637720526065508
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.8385650224215246,
"acc_stderr": 0.02469395789912846,
"acc_norm": 0.8385650224215246,
"acc_norm_stderr": 0.02469395789912846
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8396946564885496,
"acc_stderr": 0.03217829420744631,
"acc_norm": 0.8396946564885496,
"acc_norm_stderr": 0.03217829420744631
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.9090909090909091,
"acc_stderr": 0.026243194054073885,
"acc_norm": 0.9090909090909091,
"acc_norm_stderr": 0.026243194054073885
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8518518518518519,
"acc_stderr": 0.03434300243631,
"acc_norm": 0.8518518518518519,
"acc_norm_stderr": 0.03434300243631
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.9079754601226994,
"acc_stderr": 0.02271074471568872,
"acc_norm": 0.9079754601226994,
"acc_norm_stderr": 0.02271074471568872
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5178571428571429,
"acc_stderr": 0.04742762361243011,
"acc_norm": 0.5178571428571429,
"acc_norm_stderr": 0.04742762361243011
},
"harness|hendrycksTest-management|5": {
"acc": 0.8640776699029126,
"acc_stderr": 0.033932957297610096,
"acc_norm": 0.8640776699029126,
"acc_norm_stderr": 0.033932957297610096
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9316239316239316,
"acc_stderr": 0.016534627684311357,
"acc_norm": 0.9316239316239316,
"acc_norm_stderr": 0.016534627684311357
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909281,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909281
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8773946360153256,
"acc_stderr": 0.011728672144131565,
"acc_norm": 0.8773946360153256,
"acc_norm_stderr": 0.011728672144131565
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.8208092485549133,
"acc_stderr": 0.020647590029679332,
"acc_norm": 0.8208092485549133,
"acc_norm_stderr": 0.020647590029679332
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.6547486033519553,
"acc_stderr": 0.015901432608930358,
"acc_norm": 0.6547486033519553,
"acc_norm_stderr": 0.015901432608930358
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.803921568627451,
"acc_stderr": 0.0227337894054476,
"acc_norm": 0.803921568627451,
"acc_norm_stderr": 0.0227337894054476
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.842443729903537,
"acc_stderr": 0.020692237273583994,
"acc_norm": 0.842443729903537,
"acc_norm_stderr": 0.020692237273583994
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8364197530864198,
"acc_stderr": 0.020581466138257107,
"acc_norm": 0.8364197530864198,
"acc_norm_stderr": 0.020581466138257107
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.7092198581560284,
"acc_stderr": 0.027090664368353178,
"acc_norm": 0.7092198581560284,
"acc_norm_stderr": 0.027090664368353178
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.7073011734028684,
"acc_stderr": 0.011620949195849536,
"acc_norm": 0.7073011734028684,
"acc_norm_stderr": 0.011620949195849536
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.8272058823529411,
"acc_stderr": 0.02296606758558178,
"acc_norm": 0.8272058823529411,
"acc_norm_stderr": 0.02296606758558178
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.8218954248366013,
"acc_stderr": 0.015478369653108568,
"acc_norm": 0.8218954248366013,
"acc_norm_stderr": 0.015478369653108568
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.8090909090909091,
"acc_stderr": 0.03764425585984927,
"acc_norm": 0.8090909090909091,
"acc_norm_stderr": 0.03764425585984927
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.8653061224489796,
"acc_stderr": 0.021855658840811615,
"acc_norm": 0.8653061224489796,
"acc_norm_stderr": 0.021855658840811615
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.9203980099502488,
"acc_stderr": 0.01913968563350382,
"acc_norm": 0.9203980099502488,
"acc_norm_stderr": 0.01913968563350382
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.93,
"acc_stderr": 0.025643239997624294,
"acc_norm": 0.93,
"acc_norm_stderr": 0.025643239997624294
},
"harness|hendrycksTest-virology|5": {
"acc": 0.7891566265060241,
"acc_stderr": 0.03175554786629919,
"acc_norm": 0.7891566265060241,
"acc_norm_stderr": 0.03175554786629919
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.9064327485380117,
"acc_stderr": 0.02233599323116327,
"acc_norm": 0.9064327485380117,
"acc_norm_stderr": 0.02233599323116327
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2802937576499388,
"mc1_stderr": 0.015723139524608753,
"mc2": 0.40853761852658155,
"mc2_stderr": 0.014823421659209666
},
"harness|winogrande|5": {
"acc": 0.755327545382794,
"acc_stderr": 0.012082125654159738
},
"harness|gsm8k|5": {
"acc": 0.006065200909780136,
"acc_stderr": 0.00213867030146047
}
}
```
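The per-task scores above use a flat `task -> metrics` layout, so derived aggregates (for example, an MMLU-style average over the `hendrycksTest-*` subtasks) can be recomputed directly from the JSON. A minimal sketch, using a truncated copy of the results dict shown above:

```python
# Sketch: recompute an MMLU-style average from the flat results dict.
# Only two subtasks are copied here; the real file contains all 57 subjects.
latest_results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.47, "acc_norm": 0.47},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.725925925925926,
                                        "acc_norm": 0.725925925925926},
    "harness|truthfulqa:mc|0": {"mc1": 0.2802937576499388},
}

# Keep only the MMLU (hendrycksTest) entries and average their accuracies.
mmlu_accs = [m["acc"] for task, m in latest_results.items()
             if task.startswith("harness|hendrycksTest-")]
mmlu_avg = sum(mmlu_accs) / len(mmlu_accs)
print(f"MMLU average over {len(mmlu_accs)} subtasks: {mmlu_avg:.4f}")
```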
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
mask-distilled-onesec-cv12-each-chunk-uniq/chunk_172 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1064014136.0
num_examples: 208958
download_size: 1077742450
dataset_size: 1064014136.0
---
# Dataset Card for "chunk_172"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hojzas/proj8-label2 | ---
license: apache-2.0
---
|
pragmeval | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
- 1K<n<10K
- n<1K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
pretty_name: pragmeval
dataset_info:
- config_name: verifiability
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': experiential
'1': unverifiable
'2': non-experiential
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 592520
num_examples: 5712
- name: validation
num_bytes: 65215
num_examples: 634
- name: test
num_bytes: 251799
num_examples: 2424
download_size: 5330724
dataset_size: 909534
- config_name: emobank-arousal
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': low
'1': high
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 567660
num_examples: 5470
- name: validation
num_bytes: 71221
num_examples: 684
- name: test
num_bytes: 69276
num_examples: 683
download_size: 5330724
dataset_size: 708157
- config_name: switchboard
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': Response Acknowledgement
'1': Uninterpretable
'2': Or-Clause
'3': Reject
'4': Statement-non-opinion
'5': 3rd-party-talk
'6': Repeat-phrase
'7': Hold Before Answer/Agreement
'8': Signal-non-understanding
'9': Offers, Options Commits
'10': Agree/Accept
'11': Dispreferred Answers
'12': Hedge
'13': Action-directive
'14': Tag-Question
'15': Self-talk
'16': Yes-No-Question
'17': Rhetorical-Question
'18': No Answers
'19': Open-Question
'20': Conventional-closing
'21': Other Answers
'22': Acknowledge (Backchannel)
'23': Wh-Question
'24': Declarative Wh-Question
'25': Thanking
'26': Yes Answers
'27': Affirmative Non-yes Answers
'28': Declarative Yes-No-Question
'29': Backchannel in Question Form
'30': Apology
'31': Downplayer
'32': Conventional-opening
'33': Collaborative Completion
'34': Summarize/Reformulate
'35': Negative Non-no Answers
'36': Statement-opinion
'37': Appreciation
'38': Other
'39': Quotation
'40': Maybe/Accept-part
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 1021220
num_examples: 18930
- name: validation
num_bytes: 116058
num_examples: 2113
- name: test
num_bytes: 34013
num_examples: 649
download_size: 5330724
dataset_size: 1171291
- config_name: persuasiveness-eloquence
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': low
'1': high
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 153946
num_examples: 725
- name: validation
num_bytes: 19376
num_examples: 91
- name: test
num_bytes: 18379
num_examples: 90
download_size: 5330724
dataset_size: 191701
- config_name: mrda
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': Declarative-Question
'1': Statement
'2': Reject
'3': Or-Clause
'4': 3rd-party-talk
'5': Continuer
'6': Hold Before Answer/Agreement
'7': Assessment/Appreciation
'8': Signal-non-understanding
'9': Floor Holder
'10': Sympathy
'11': Dispreferred Answers
'12': Reformulate/Summarize
'13': Exclamation
'14': Interrupted/Abandoned/Uninterpretable
'15': Expansions of y/n Answers
'16': Action-directive
'17': Tag-Question
'18': Accept
'19': Rhetorical-question Continue
'20': Self-talk
'21': Rhetorical-Question
'22': Yes-No-question
'23': Open-Question
'24': Rising Tone
'25': Other Answers
'26': Commit
'27': Wh-Question
'28': Repeat
'29': Follow Me
'30': Thanking
'31': Offer
'32': About-task
'33': Reject-part
'34': Affirmative Non-yes Answers
'35': Apology
'36': Downplayer
'37': Humorous Material
'38': Accept-part
'39': Collaborative Completion
'40': Mimic Other
'41': Understanding Check
'42': Misspeak Self-Correction
'43': Or-Question
'44': Topic Change
'45': Negative Non-no Answers
'46': Floor Grabber
'47': Correct-misspeaking
'48': Maybe
'49': Acknowledge-answer
'50': Defending/Explanation
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 963913
num_examples: 14484
- name: validation
num_bytes: 111813
num_examples: 1630
- name: test
num_bytes: 419797
num_examples: 6459
download_size: 5330724
dataset_size: 1495523
- config_name: gum
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': preparation
'1': evaluation
'2': circumstance
'3': solutionhood
'4': justify
'5': result
'6': evidence
'7': purpose
'8': concession
'9': elaboration
'10': background
'11': condition
'12': cause
'13': restatement
'14': motivation
'15': antithesis
'16': no_relation
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 270401
num_examples: 1700
- name: validation
num_bytes: 35405
num_examples: 259
- name: test
num_bytes: 40334
num_examples: 248
download_size: 5330724
dataset_size: 346140
- config_name: emergent
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': observing
'1': for
'2': against
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 313257
num_examples: 2076
- name: validation
num_bytes: 38948
num_examples: 259
- name: test
num_bytes: 38842
num_examples: 259
download_size: 5330724
dataset_size: 391047
- config_name: persuasiveness-relevance
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': low
'1': high
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 153158
num_examples: 725
- name: validation
num_bytes: 19663
num_examples: 91
- name: test
num_bytes: 18880
num_examples: 90
download_size: 5330724
dataset_size: 191701
- config_name: persuasiveness-specificity
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': low
'1': high
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 106594
num_examples: 504
- name: validation
num_bytes: 13766
num_examples: 62
- name: test
num_bytes: 12712
num_examples: 62
download_size: 5330724
dataset_size: 133072
- config_name: persuasiveness-strength
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': low
'1': high
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 79679
num_examples: 371
- name: validation
num_bytes: 10052
num_examples: 46
- name: test
num_bytes: 10225
num_examples: 46
download_size: 5330724
dataset_size: 99956
- config_name: emobank-dominance
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': low
'1': high
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 660303
num_examples: 6392
- name: validation
num_bytes: 86802
num_examples: 798
- name: test
num_bytes: 83319
num_examples: 798
download_size: 5330724
dataset_size: 830424
- config_name: squinky-implicature
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': low
'1': high
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 471552
num_examples: 3724
- name: validation
num_bytes: 58087
num_examples: 465
- name: test
num_bytes: 56549
num_examples: 465
download_size: 5330724
dataset_size: 586188
- config_name: sarcasm
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': notsarc
'1': sarc
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 2177332
num_examples: 3754
- name: validation
num_bytes: 257834
num_examples: 469
- name: test
num_bytes: 269724
num_examples: 469
download_size: 5330724
dataset_size: 2704890
- config_name: squinky-formality
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': low
'1': high
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 459721
num_examples: 3622
- name: validation
num_bytes: 59921
num_examples: 453
- name: test
num_bytes: 58242
num_examples: 452
download_size: 5330724
dataset_size: 577884
- config_name: stac
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': Comment
'1': Contrast
'2': Q_Elab
'3': Parallel
'4': Explanation
'5': Narration
'6': Continuation
'7': Result
'8': Acknowledgement
'9': Alternation
'10': Question_answer_pair
'11': Correction
'12': Clarification_question
'13': Conditional
'14': Sequence
'15': Elaboration
'16': Background
'17': no_relation
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 645969
num_examples: 11230
- name: validation
num_bytes: 71400
num_examples: 1247
- name: test
num_bytes: 70451
num_examples: 1304
download_size: 5330724
dataset_size: 787820
- config_name: pdtb
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': Synchrony
'1': Contrast
'2': Asynchronous
'3': Conjunction
'4': List
'5': Condition
'6': Pragmatic concession
'7': Restatement
'8': Pragmatic cause
'9': Alternative
'10': Pragmatic condition
'11': Pragmatic contrast
'12': Instantiation
'13': Exception
'14': Cause
'15': Concession
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 2968638
num_examples: 12907
- name: validation
num_bytes: 276997
num_examples: 1204
- name: test
num_bytes: 235851
num_examples: 1085
download_size: 5330724
dataset_size: 3481486
- config_name: persuasiveness-premisetype
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': testimony
'1': warrant
'2': invented_instance
'3': common_knowledge
'4': statistics
'5': analogy
'6': definition
'7': real_example
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 122631
num_examples: 566
- name: validation
num_bytes: 15920
num_examples: 71
- name: test
num_bytes: 14395
num_examples: 70
download_size: 5330724
dataset_size: 152946
- config_name: squinky-informativeness
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': low
'1': high
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 464855
num_examples: 3719
- name: validation
num_bytes: 60447
num_examples: 465
- name: test
num_bytes: 56872
num_examples: 464
download_size: 5330724
dataset_size: 582174
- config_name: persuasiveness-claimtype
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': Value
'1': Fact
'2': Policy
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 31259
num_examples: 160
- name: validation
num_bytes: 3803
num_examples: 20
- name: test
num_bytes: 3717
num_examples: 19
download_size: 5330724
dataset_size: 38779
- config_name: emobank-valence
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': low
'1': high
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 539652
num_examples: 5150
- name: validation
num_bytes: 62809
num_examples: 644
- name: test
num_bytes: 66178
num_examples: 643
download_size: 5330724
dataset_size: 668639
config_names:
- emergent
- emobank-arousal
- emobank-dominance
- emobank-valence
- gum
- mrda
- pdtb
- persuasiveness-claimtype
- persuasiveness-eloquence
- persuasiveness-premisetype
- persuasiveness-relevance
- persuasiveness-specificity
- persuasiveness-strength
- sarcasm
- squinky-formality
- squinky-implicature
- squinky-informativeness
- stac
- switchboard
- verifiability
---
# Dataset Card for pragmeval
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
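The YAML metadata above defines an integer class label for each configuration. Until the card is filled in, here is a minimal sketch of decoding those labels; the names for the `verifiability` config are copied from the metadata, and other configs follow the same pattern:

```python
# Label id -> name mapping for the "verifiability" config,
# copied from the YAML metadata above.
VERIFIABILITY_LABELS = {
    0: "experiential",
    1: "unverifiable",
    2: "non-experiential",
}

def decode_label(label_id: int) -> str:
    """Map an integer label from the dataset to its class name."""
    return VERIFIABILITY_LABELS[label_id]

print(decode_label(1))  # unverifiable
```

With the `datasets` library, `load_dataset("pragmeval", "verifiability")` yields examples whose `label` field can be decoded this way.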
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@sileod](https://github.com/sileod) for adding this dataset. |
carlosejimenez/wikitext-2__llama__block-size-1024 | ---
dataset_info:
features:
- name: tokens
sequence: string
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: test
num_bytes: 4467438
num_examples: 331
- name: train
num_bytes: 1864955854
num_examples: 137385
- name: validation
num_bytes: 3955329
num_examples: 291
download_size: 553105401
dataset_size: 1873378621
---
# Dataset Card for "wikitext-2__llama__block-size-1024"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
alf894/faqDataset | ---
license: mit
---
|
ovior/twitter_dataset_1713009155 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 2816834
num_examples: 8373
download_size: 1614377
dataset_size: 2816834
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CyberHarem/soleil_fireemblem | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Soleil (Fire Emblem)
This is the dataset of Soleil (Fire Emblem), containing 156 images and their tags.
The core tags of this character are `pink_hair, long_hair, hairband, breasts, pink_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 156 | 137.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/soleil_fireemblem/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 156 | 86.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/soleil_fireemblem/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 334 | 174.93 MiB | [Download](https://huggingface.co/datasets/CyberHarem/soleil_fireemblem/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 156 | 124.18 MiB | [Download](https://huggingface.co/datasets/CyberHarem/soleil_fireemblem/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 334 | 231.61 MiB | [Download](https://huggingface.co/datasets/CyberHarem/soleil_fireemblem/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/soleil_fireemblem',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 12 |  |  |  |  |  | 1girl, hetero, penis, 1boy, solo_focus, nipples, sex, blush, vaginal, spread_legs, censored, cum_in_pussy, navel, smile, medium_breasts, open_mouth, completely_nude, large_breasts |
| 1 | 29 |  |  |  |  |  | 1girl, solo, smile, simple_background, gloves, white_background, armor, looking_at_viewer, open_mouth, sword |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | hetero | penis | 1boy | solo_focus | nipples | sex | blush | vaginal | spread_legs | censored | cum_in_pussy | navel | smile | medium_breasts | open_mouth | completely_nude | large_breasts | solo | simple_background | gloves | white_background | armor | looking_at_viewer | sword |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------|:--------|:-------|:-------------|:----------|:------|:--------|:----------|:--------------|:-----------|:---------------|:--------|:--------|:-----------------|:-------------|:------------------|:----------------|:-------|:--------------------|:---------|:-------------------|:--------|:--------------------|:--------|
| 0 | 12 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | |
| 1 | 29 |  |  |  |  |  | X | | | | | | | | | | | | | X | | X | | | X | X | X | X | X | X | X |
|
BunnyToon/nilce | ---
license: openrail
---
|
ayoubkirouane/arxiv-math | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 35436503.0
num_examples: 50488
download_size: 18875033
dataset_size: 35436503.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "arxiv-math"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
NEUDM/towe | ---
language:
- en
---
> The datasets above belong to the ABSA (Aspect-Based Sentiment Analysis) field. Their basic form is extracting, from a sentence: aspect terms, aspect categories (term categories), the sentiment polarity of each term in context, and the opinion words targeting that term. Different datasets extract different subsets of this information, as noted in the "instruction" key of each jsonl file. Here the task has been recast as a generation task, requiring the model to produce the extraction result in a fixed format.
#### Example: one record from the jsonl file extracted from the acos dataset
```
{
"task_type": "generation",
"dataset": "acos",
"input": ["the computer has difficulty switching between tablet and computer ."],
"output": "[['computer', 'laptop usability', 'negative', 'difficulty']]",
"situation": "none",
"label": "",
"extra": "",
"instruction": "
Task: Extracting aspect terms and their corresponding aspect categories, sentiment polarities, and opinion words.
Input: A sentence
Output: A list of 4-tuples, where each tuple contains the extracted aspect term, its aspect category, sentiment polarity, and opinion words (if any). Supplement: \"Null\" means that there is no occurrence in the sentence.
Example:
Sentence: \"Also it's not a true SSD drive in there but eMMC, which makes a difference.\"
Output: [['SSD drive', 'hard_disc operation_performance', 'negative', 'NULL']]'
"
}
```
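Since the `output` field stores the tuple list as a Python-style string, a minimal parsing sketch using only the standard library (the field contents follow the record above):

```python
import ast

def parse_absa_output(output: str) -> list:
    """Parse the stringified list of 4-tuples in the "output" field
    into a Python list of lists."""
    return ast.literal_eval(output)

record_output = "[['computer', 'laptop usability', 'negative', 'difficulty']]"
parsed = parse_absa_output(record_output)
print(parsed[0][2])  # negative
```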
> The `label` and `extra` fields are left empty here. The instruction uses the string template shown above, with one example included for one-shot prompting. The ABSA datasets (absa-quad, acos, arts, aste-data-v2, mams, semeval-2014, semeval-2015, semeval-2016, towe) each use the same instruction template per dataset, with minor differences in content; in some datasets, different records within the same dataset carry slightly different instruction contents.
#### Original dataset
- Data [link](https://github.com/NJUNLP/TOWE)
- Paper: [Target-oriented Opinion Words Extraction with Target-fused Neural Sequence Labeling](https://aclanthology.org/N19-1259/)
- Note: the original data consists of four folders (laptop14, restuarant14, restuarant15, and restuarant16); the data differ across folders, but the extracted elements are the same.
#### Current SOTA
*Figures from the [paper](https://aclanthology.org/N19-1259/)*
- Metric: F1-Score
- Model: IOG
- laptop14: **71.35**
- restuarant14: **80.02**
- restuarant15: **73.25**
- restuarant16: **81.69**
- Paper: [Target-oriented Opinion Words Extraction with Target-fused Neural Sequence Labeling](https://aclanthology.org/N19-1259/)
- Note: the TOWE paper's contributions are proposing a new ABSA subtask (TOWE) and building a new dataset. However, a preliminary survey on [Google Scholar](https://scholar.google.com/scholar?as_ylo=2023&hl=zh-CN&as_sdt=2005&sciodt=0,5&cites=10978596531168101977&scipsc=) shows that although many later works cite the paper, they do not appear to use the TOWE dataset itself, so for now the model from the paper that introduced TOWE is treated as the SOTA.
|
johnowhitaker/mcqgen_1k_initial_examples | ---
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: correct_answer
dtype: string
- name: text
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 902358
num_examples: 975
download_size: 558885
dataset_size: 902358
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "mcqgen_1k_initial_examples"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Bruss/entidades_requisitos | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1654448
num_examples: 1000
download_size: 966693
dataset_size: 1654448
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
open-llm-leaderboard/details_DrNicefellow__Mistral-4-from-Mixtral-8x7B-v0.1 | ---
pretty_name: Evaluation run of DrNicefellow/Mistral-4-from-Mixtral-8x7B-v0.1
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [DrNicefellow/Mistral-4-from-Mixtral-8x7B-v0.1](https://huggingface.co/DrNicefellow/Mistral-4-from-Mixtral-8x7B-v0.1)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_DrNicefellow__Mistral-4-from-Mixtral-8x7B-v0.1\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-04-15T19:22:34.839658](https://huggingface.co/datasets/open-llm-leaderboard/details_DrNicefellow__Mistral-4-from-Mixtral-8x7B-v0.1/blob/main/results_2024-04-15T19-22-34.839658.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.24768201532826062,\n\
\ \"acc_stderr\": 0.03028171804374249,\n \"acc_norm\": 0.24936515830727307,\n\
\ \"acc_norm_stderr\": 0.03109188686084462,\n \"mc1\": 0.2423500611995104,\n\
\ \"mc1_stderr\": 0.01500067437357034,\n \"mc2\": 0.485079980098346,\n\
\ \"mc2_stderr\": 0.016210663798591776\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.21245733788395904,\n \"acc_stderr\": 0.011953482906582954,\n\
\ \"acc_norm\": 0.28242320819112626,\n \"acc_norm_stderr\": 0.01315545688409722\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.26070503883688506,\n\
\ \"acc_stderr\": 0.004381220409641171,\n \"acc_norm\": 0.2753435570603465,\n\
\ \"acc_norm_stderr\": 0.004457743287380273\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.22,\n \"acc_stderr\": 0.04163331998932268,\n \
\ \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.04163331998932268\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.2814814814814815,\n\
\ \"acc_stderr\": 0.03885004245800255,\n \"acc_norm\": 0.2814814814814815,\n\
\ \"acc_norm_stderr\": 0.03885004245800255\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.17763157894736842,\n \"acc_stderr\": 0.031103182383123398,\n\
\ \"acc_norm\": 0.17763157894736842,\n \"acc_norm_stderr\": 0.031103182383123398\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.28,\n\
\ \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.28,\n \
\ \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.2188679245283019,\n \"acc_stderr\": 0.02544786382510861,\n\
\ \"acc_norm\": 0.2188679245283019,\n \"acc_norm_stderr\": 0.02544786382510861\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2152777777777778,\n\
\ \"acc_stderr\": 0.03437079344106135,\n \"acc_norm\": 0.2152777777777778,\n\
\ \"acc_norm_stderr\": 0.03437079344106135\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.21,\n \"acc_stderr\": 0.040936018074033256,\n \
\ \"acc_norm\": 0.21,\n \"acc_norm_stderr\": 0.040936018074033256\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.14,\n \"acc_stderr\": 0.034873508801977725,\n \"acc_norm\"\
: 0.14,\n \"acc_norm_stderr\": 0.034873508801977725\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.2,\n \"acc_stderr\": 0.04020151261036846,\n \
\ \"acc_norm\": 0.2,\n \"acc_norm_stderr\": 0.04020151261036846\n },\n\
\ \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.2138728323699422,\n\
\ \"acc_stderr\": 0.03126511206173043,\n \"acc_norm\": 0.2138728323699422,\n\
\ \"acc_norm_stderr\": 0.03126511206173043\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.21568627450980393,\n \"acc_stderr\": 0.04092563958237654,\n\
\ \"acc_norm\": 0.21568627450980393,\n \"acc_norm_stderr\": 0.04092563958237654\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.22,\n \"acc_stderr\": 0.041633319989322716,\n \"acc_norm\": 0.22,\n\
\ \"acc_norm_stderr\": 0.041633319989322716\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.19574468085106383,\n \"acc_stderr\": 0.025937853139977148,\n\
\ \"acc_norm\": 0.19574468085106383,\n \"acc_norm_stderr\": 0.025937853139977148\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.24561403508771928,\n\
\ \"acc_stderr\": 0.04049339297748141,\n \"acc_norm\": 0.24561403508771928,\n\
\ \"acc_norm_stderr\": 0.04049339297748141\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.21379310344827587,\n \"acc_stderr\": 0.034165204477475494,\n\
\ \"acc_norm\": 0.21379310344827587,\n \"acc_norm_stderr\": 0.034165204477475494\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.2566137566137566,\n \"acc_stderr\": 0.022494510767503154,\n \"\
acc_norm\": 0.2566137566137566,\n \"acc_norm_stderr\": 0.022494510767503154\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.14285714285714285,\n\
\ \"acc_stderr\": 0.03129843185743809,\n \"acc_norm\": 0.14285714285714285,\n\
\ \"acc_norm_stderr\": 0.03129843185743809\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.18,\n \"acc_stderr\": 0.038612291966536934,\n \
\ \"acc_norm\": 0.18,\n \"acc_norm_stderr\": 0.038612291966536934\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.3032258064516129,\n \"acc_stderr\": 0.02614868593067175,\n \"\
acc_norm\": 0.3032258064516129,\n \"acc_norm_stderr\": 0.02614868593067175\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.2955665024630542,\n \"acc_stderr\": 0.032104944337514575,\n \"\
acc_norm\": 0.2955665024630542,\n \"acc_norm_stderr\": 0.032104944337514575\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621505,\n \"acc_norm\"\
: 0.32,\n \"acc_norm_stderr\": 0.04688261722621505\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.22424242424242424,\n \"acc_stderr\": 0.032568666616811015,\n\
\ \"acc_norm\": 0.22424242424242424,\n \"acc_norm_stderr\": 0.032568666616811015\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.22727272727272727,\n \"acc_stderr\": 0.02985751567338642,\n \"\
acc_norm\": 0.22727272727272727,\n \"acc_norm_stderr\": 0.02985751567338642\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.23316062176165803,\n \"acc_stderr\": 0.030516111371476008,\n\
\ \"acc_norm\": 0.23316062176165803,\n \"acc_norm_stderr\": 0.030516111371476008\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.2128205128205128,\n \"acc_stderr\": 0.020752423722128013,\n\
\ \"acc_norm\": 0.2128205128205128,\n \"acc_norm_stderr\": 0.020752423722128013\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.26666666666666666,\n \"acc_stderr\": 0.026962424325073828,\n \
\ \"acc_norm\": 0.26666666666666666,\n \"acc_norm_stderr\": 0.026962424325073828\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.20588235294117646,\n \"acc_stderr\": 0.026265024608275886,\n\
\ \"acc_norm\": 0.20588235294117646,\n \"acc_norm_stderr\": 0.026265024608275886\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3576158940397351,\n \"acc_stderr\": 0.03913453431177258,\n \"\
acc_norm\": 0.3576158940397351,\n \"acc_norm_stderr\": 0.03913453431177258\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.24403669724770644,\n \"acc_stderr\": 0.01841528635141641,\n \"\
acc_norm\": 0.24403669724770644,\n \"acc_norm_stderr\": 0.01841528635141641\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.4722222222222222,\n \"acc_stderr\": 0.0340470532865388,\n \"acc_norm\"\
: 0.4722222222222222,\n \"acc_norm_stderr\": 0.0340470532865388\n },\n\
\ \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.23039215686274508,\n\
\ \"acc_stderr\": 0.029554292605695063,\n \"acc_norm\": 0.23039215686274508,\n\
\ \"acc_norm_stderr\": 0.029554292605695063\n },\n \"harness|hendrycksTest-high_school_world_history|5\"\
: {\n \"acc\": 0.2616033755274262,\n \"acc_stderr\": 0.028609516716994934,\n\
\ \"acc_norm\": 0.2616033755274262,\n \"acc_norm_stderr\": 0.028609516716994934\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.3542600896860987,\n\
\ \"acc_stderr\": 0.03210062154134986,\n \"acc_norm\": 0.3542600896860987,\n\
\ \"acc_norm_stderr\": 0.03210062154134986\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.2595419847328244,\n \"acc_stderr\": 0.03844876139785271,\n\
\ \"acc_norm\": 0.2595419847328244,\n \"acc_norm_stderr\": 0.03844876139785271\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.23140495867768596,\n \"acc_stderr\": 0.03849856098794089,\n \"\
acc_norm\": 0.23140495867768596,\n \"acc_norm_stderr\": 0.03849856098794089\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.24074074074074073,\n\
\ \"acc_stderr\": 0.04133119440243839,\n \"acc_norm\": 0.24074074074074073,\n\
\ \"acc_norm_stderr\": 0.04133119440243839\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.27607361963190186,\n \"acc_stderr\": 0.03512385283705051,\n\
\ \"acc_norm\": 0.27607361963190186,\n \"acc_norm_stderr\": 0.03512385283705051\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.24107142857142858,\n\
\ \"acc_stderr\": 0.04059867246952687,\n \"acc_norm\": 0.24107142857142858,\n\
\ \"acc_norm_stderr\": 0.04059867246952687\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.17475728155339806,\n \"acc_stderr\": 0.037601780060266224,\n\
\ \"acc_norm\": 0.17475728155339806,\n \"acc_norm_stderr\": 0.037601780060266224\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.19658119658119658,\n\
\ \"acc_stderr\": 0.02603538609895129,\n \"acc_norm\": 0.19658119658119658,\n\
\ \"acc_norm_stderr\": 0.02603538609895129\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.26947637292464877,\n\
\ \"acc_stderr\": 0.01586624307321506,\n \"acc_norm\": 0.26947637292464877,\n\
\ \"acc_norm_stderr\": 0.01586624307321506\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.24566473988439305,\n \"acc_stderr\": 0.02317629820399201,\n\
\ \"acc_norm\": 0.24566473988439305,\n \"acc_norm_stderr\": 0.02317629820399201\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.23687150837988827,\n\
\ \"acc_stderr\": 0.01421957078810399,\n \"acc_norm\": 0.23687150837988827,\n\
\ \"acc_norm_stderr\": 0.01421957078810399\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.26143790849673204,\n \"acc_stderr\": 0.025160998214292456,\n\
\ \"acc_norm\": 0.26143790849673204,\n \"acc_norm_stderr\": 0.025160998214292456\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.2990353697749196,\n\
\ \"acc_stderr\": 0.026003301117885135,\n \"acc_norm\": 0.2990353697749196,\n\
\ \"acc_norm_stderr\": 0.026003301117885135\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.2222222222222222,\n \"acc_stderr\": 0.023132376234543336,\n\
\ \"acc_norm\": 0.2222222222222222,\n \"acc_norm_stderr\": 0.023132376234543336\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.19148936170212766,\n \"acc_stderr\": 0.023472645247949456,\n \
\ \"acc_norm\": 0.19148936170212766,\n \"acc_norm_stderr\": 0.023472645247949456\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.24445893089960888,\n\
\ \"acc_stderr\": 0.010976425013113902,\n \"acc_norm\": 0.24445893089960888,\n\
\ \"acc_norm_stderr\": 0.010976425013113902\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.4375,\n \"acc_stderr\": 0.030134614954403924,\n \
\ \"acc_norm\": 0.4375,\n \"acc_norm_stderr\": 0.030134614954403924\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.25163398692810457,\n \"acc_stderr\": 0.01755581809132226,\n \
\ \"acc_norm\": 0.25163398692810457,\n \"acc_norm_stderr\": 0.01755581809132226\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.22727272727272727,\n\
\ \"acc_stderr\": 0.04013964554072775,\n \"acc_norm\": 0.22727272727272727,\n\
\ \"acc_norm_stderr\": 0.04013964554072775\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.3346938775510204,\n \"acc_stderr\": 0.030209235226242304,\n\
\ \"acc_norm\": 0.3346938775510204,\n \"acc_norm_stderr\": 0.030209235226242304\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.23880597014925373,\n\
\ \"acc_stderr\": 0.030147775935409224,\n \"acc_norm\": 0.23880597014925373,\n\
\ \"acc_norm_stderr\": 0.030147775935409224\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \
\ \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.27710843373493976,\n\
\ \"acc_stderr\": 0.03484331592680588,\n \"acc_norm\": 0.27710843373493976,\n\
\ \"acc_norm_stderr\": 0.03484331592680588\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.21052631578947367,\n \"acc_stderr\": 0.0312678171466318,\n\
\ \"acc_norm\": 0.21052631578947367,\n \"acc_norm_stderr\": 0.0312678171466318\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2423500611995104,\n\
\ \"mc1_stderr\": 0.01500067437357034,\n \"mc2\": 0.485079980098346,\n\
\ \"mc2_stderr\": 0.016210663798591776\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.48066298342541436,\n \"acc_stderr\": 0.014041972733712977\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\"\
: 0.0\n }\n}\n```"
repo_url: https://huggingface.co/DrNicefellow/Mistral-4-from-Mixtral-8x7B-v0.1
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|arc:challenge|25_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|gsm8k|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hellaswag|10_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-15T19-22-34.839658.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-15T19-22-34.839658.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- '**/details_harness|winogrande|5_2024-04-15T19-22-34.839658.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-04-15T19-22-34.839658.parquet'
- config_name: results
data_files:
- split: 2024_04_15T19_22_34.839658
path:
- results_2024-04-15T19-22-34.839658.parquet
- split: latest
path:
- results_2024-04-15T19-22-34.839658.parquet
---
# Dataset Card for Evaluation run of DrNicefellow/Mistral-4-from-Mixtral-8x7B-v0.1
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [DrNicefellow/Mistral-4-from-Mixtral-8x7B-v0.1](https://huggingface.co/DrNicefellow/Mistral-4-from-Mixtral-8x7B-v0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_DrNicefellow__Mistral-4-from-Mixtral-8x7B-v0.1",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-04-15T19:22:34.839658](https://huggingface.co/datasets/open-llm-leaderboard/details_DrNicefellow__Mistral-4-from-Mixtral-8x7B-v0.1/blob/main/results_2024-04-15T19-22-34.839658.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each of them in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.24768201532826062,
"acc_stderr": 0.03028171804374249,
"acc_norm": 0.24936515830727307,
"acc_norm_stderr": 0.03109188686084462,
"mc1": 0.2423500611995104,
"mc1_stderr": 0.01500067437357034,
"mc2": 0.485079980098346,
"mc2_stderr": 0.016210663798591776
},
"harness|arc:challenge|25": {
"acc": 0.21245733788395904,
"acc_stderr": 0.011953482906582954,
"acc_norm": 0.28242320819112626,
"acc_norm_stderr": 0.01315545688409722
},
"harness|hellaswag|10": {
"acc": 0.26070503883688506,
"acc_stderr": 0.004381220409641171,
"acc_norm": 0.2753435570603465,
"acc_norm_stderr": 0.004457743287380273
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.22,
"acc_stderr": 0.04163331998932268,
"acc_norm": 0.22,
"acc_norm_stderr": 0.04163331998932268
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.2814814814814815,
"acc_stderr": 0.03885004245800255,
"acc_norm": 0.2814814814814815,
"acc_norm_stderr": 0.03885004245800255
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.17763157894736842,
"acc_stderr": 0.031103182383123398,
"acc_norm": 0.17763157894736842,
"acc_norm_stderr": 0.031103182383123398
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.2188679245283019,
"acc_stderr": 0.02544786382510861,
"acc_norm": 0.2188679245283019,
"acc_norm_stderr": 0.02544786382510861
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2152777777777778,
"acc_stderr": 0.03437079344106135,
"acc_norm": 0.2152777777777778,
"acc_norm_stderr": 0.03437079344106135
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.21,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.21,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.14,
"acc_stderr": 0.034873508801977725,
"acc_norm": 0.14,
"acc_norm_stderr": 0.034873508801977725
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.2,
"acc_stderr": 0.04020151261036846,
"acc_norm": 0.2,
"acc_norm_stderr": 0.04020151261036846
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.2138728323699422,
"acc_stderr": 0.03126511206173043,
"acc_norm": 0.2138728323699422,
"acc_norm_stderr": 0.03126511206173043
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.21568627450980393,
"acc_stderr": 0.04092563958237654,
"acc_norm": 0.21568627450980393,
"acc_norm_stderr": 0.04092563958237654
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.22,
"acc_stderr": 0.041633319989322716,
"acc_norm": 0.22,
"acc_norm_stderr": 0.041633319989322716
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.19574468085106383,
"acc_stderr": 0.025937853139977148,
"acc_norm": 0.19574468085106383,
"acc_norm_stderr": 0.025937853139977148
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.24561403508771928,
"acc_stderr": 0.04049339297748141,
"acc_norm": 0.24561403508771928,
"acc_norm_stderr": 0.04049339297748141
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.21379310344827587,
"acc_stderr": 0.034165204477475494,
"acc_norm": 0.21379310344827587,
"acc_norm_stderr": 0.034165204477475494
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.2566137566137566,
"acc_stderr": 0.022494510767503154,
"acc_norm": 0.2566137566137566,
"acc_norm_stderr": 0.022494510767503154
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.14285714285714285,
"acc_stderr": 0.03129843185743809,
"acc_norm": 0.14285714285714285,
"acc_norm_stderr": 0.03129843185743809
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.18,
"acc_stderr": 0.038612291966536934,
"acc_norm": 0.18,
"acc_norm_stderr": 0.038612291966536934
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.3032258064516129,
"acc_stderr": 0.02614868593067175,
"acc_norm": 0.3032258064516129,
"acc_norm_stderr": 0.02614868593067175
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.2955665024630542,
"acc_stderr": 0.032104944337514575,
"acc_norm": 0.2955665024630542,
"acc_norm_stderr": 0.032104944337514575
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621505,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621505
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.22424242424242424,
"acc_stderr": 0.032568666616811015,
"acc_norm": 0.22424242424242424,
"acc_norm_stderr": 0.032568666616811015
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.22727272727272727,
"acc_stderr": 0.02985751567338642,
"acc_norm": 0.22727272727272727,
"acc_norm_stderr": 0.02985751567338642
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.23316062176165803,
"acc_stderr": 0.030516111371476008,
"acc_norm": 0.23316062176165803,
"acc_norm_stderr": 0.030516111371476008
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.2128205128205128,
"acc_stderr": 0.020752423722128013,
"acc_norm": 0.2128205128205128,
"acc_norm_stderr": 0.020752423722128013
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.26666666666666666,
"acc_stderr": 0.026962424325073828,
"acc_norm": 0.26666666666666666,
"acc_norm_stderr": 0.026962424325073828
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.20588235294117646,
"acc_stderr": 0.026265024608275886,
"acc_norm": 0.20588235294117646,
"acc_norm_stderr": 0.026265024608275886
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3576158940397351,
"acc_stderr": 0.03913453431177258,
"acc_norm": 0.3576158940397351,
"acc_norm_stderr": 0.03913453431177258
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.24403669724770644,
"acc_stderr": 0.01841528635141641,
"acc_norm": 0.24403669724770644,
"acc_norm_stderr": 0.01841528635141641
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4722222222222222,
"acc_stderr": 0.0340470532865388,
"acc_norm": 0.4722222222222222,
"acc_norm_stderr": 0.0340470532865388
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.23039215686274508,
"acc_stderr": 0.029554292605695063,
"acc_norm": 0.23039215686274508,
"acc_norm_stderr": 0.029554292605695063
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.2616033755274262,
"acc_stderr": 0.028609516716994934,
"acc_norm": 0.2616033755274262,
"acc_norm_stderr": 0.028609516716994934
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.3542600896860987,
"acc_stderr": 0.03210062154134986,
"acc_norm": 0.3542600896860987,
"acc_norm_stderr": 0.03210062154134986
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.2595419847328244,
"acc_stderr": 0.03844876139785271,
"acc_norm": 0.2595419847328244,
"acc_norm_stderr": 0.03844876139785271
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.23140495867768596,
"acc_stderr": 0.03849856098794089,
"acc_norm": 0.23140495867768596,
"acc_norm_stderr": 0.03849856098794089
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.24074074074074073,
"acc_stderr": 0.04133119440243839,
"acc_norm": 0.24074074074074073,
"acc_norm_stderr": 0.04133119440243839
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.27607361963190186,
"acc_stderr": 0.03512385283705051,
"acc_norm": 0.27607361963190186,
"acc_norm_stderr": 0.03512385283705051
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.24107142857142858,
"acc_stderr": 0.04059867246952687,
"acc_norm": 0.24107142857142858,
"acc_norm_stderr": 0.04059867246952687
},
"harness|hendrycksTest-management|5": {
"acc": 0.17475728155339806,
"acc_stderr": 0.037601780060266224,
"acc_norm": 0.17475728155339806,
"acc_norm_stderr": 0.037601780060266224
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.19658119658119658,
"acc_stderr": 0.02603538609895129,
"acc_norm": 0.19658119658119658,
"acc_norm_stderr": 0.02603538609895129
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.26947637292464877,
"acc_stderr": 0.01586624307321506,
"acc_norm": 0.26947637292464877,
"acc_norm_stderr": 0.01586624307321506
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.24566473988439305,
"acc_stderr": 0.02317629820399201,
"acc_norm": 0.24566473988439305,
"acc_norm_stderr": 0.02317629820399201
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.23687150837988827,
"acc_stderr": 0.01421957078810399,
"acc_norm": 0.23687150837988827,
"acc_norm_stderr": 0.01421957078810399
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.26143790849673204,
"acc_stderr": 0.025160998214292456,
"acc_norm": 0.26143790849673204,
"acc_norm_stderr": 0.025160998214292456
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.2990353697749196,
"acc_stderr": 0.026003301117885135,
"acc_norm": 0.2990353697749196,
"acc_norm_stderr": 0.026003301117885135
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.2222222222222222,
"acc_stderr": 0.023132376234543336,
"acc_norm": 0.2222222222222222,
"acc_norm_stderr": 0.023132376234543336
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.19148936170212766,
"acc_stderr": 0.023472645247949456,
"acc_norm": 0.19148936170212766,
"acc_norm_stderr": 0.023472645247949456
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.24445893089960888,
"acc_stderr": 0.010976425013113902,
"acc_norm": 0.24445893089960888,
"acc_norm_stderr": 0.010976425013113902
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.4375,
"acc_stderr": 0.030134614954403924,
"acc_norm": 0.4375,
"acc_norm_stderr": 0.030134614954403924
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.25163398692810457,
"acc_stderr": 0.01755581809132226,
"acc_norm": 0.25163398692810457,
"acc_norm_stderr": 0.01755581809132226
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.22727272727272727,
"acc_stderr": 0.04013964554072775,
"acc_norm": 0.22727272727272727,
"acc_norm_stderr": 0.04013964554072775
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.3346938775510204,
"acc_stderr": 0.030209235226242304,
"acc_norm": 0.3346938775510204,
"acc_norm_stderr": 0.030209235226242304
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.23880597014925373,
"acc_stderr": 0.030147775935409224,
"acc_norm": 0.23880597014925373,
"acc_norm_stderr": 0.030147775935409224
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-virology|5": {
"acc": 0.27710843373493976,
"acc_stderr": 0.03484331592680588,
"acc_norm": 0.27710843373493976,
"acc_norm_stderr": 0.03484331592680588
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.21052631578947367,
"acc_stderr": 0.0312678171466318,
"acc_norm": 0.21052631578947367,
"acc_norm_stderr": 0.0312678171466318
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2423500611995104,
"mc1_stderr": 0.01500067437357034,
"mc2": 0.485079980098346,
"mc2_stderr": 0.016210663798591776
},
"harness|winogrande|5": {
"acc": 0.48066298342541436,
"acc_stderr": 0.014041972733712977
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
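The top-level `"all"` block aggregates the per-task scores. As a rough sketch (the simple-mean aggregation rule is an assumption here, not the harness's exact code), a macro-averaged accuracy can be recomputed from a results dict shaped like the JSON above:

```python
# Sketch: recompute a macro-averaged accuracy from a results dict shaped
# like the JSON above. Averaging only over entries that report "acc"
# (a simple unweighted mean) is an assumption, not the harness's exact rule.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.22},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.2814814814814815},
    "harness|truthfulqa:mc|0": {"mc1": 0.2423500611995104},  # no "acc" key
}

accs = [v["acc"] for v in results.values() if "acc" in v]
macro_acc = sum(accs) / len(accs)
print(f"macro acc over {len(accs)} tasks: {macro_acc:.4f}")
```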
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
autoevaluate/autoeval-staging-eval-cnn_dailymail-3.0.0-fcbcd1-15976191 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cnn_dailymail
eval_info:
task: summarization
model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
metrics: ['rouge', 'accuracy', 'exact_match']
dataset_name: cnn_dailymail
dataset_config: 3.0.0
dataset_split: test
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model. |
unanam/mdrama | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcripts
dtype: string
splits:
- name: train
num_bytes: 3838962134.5036316
num_examples: 2539
- name: test
num_bytes: 489741268.705104
num_examples: 318
- name: valid
num_bytes: 498400934.363264
num_examples: 317
download_size: 3094162480
dataset_size: 4827104337.572
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
---
|
svjack/cmmlu_ed | ---
license: cc-by-nc-4.0
task_categories:
- multiple-choice
- question-answering
language:
- zh
tags:
- chinese
- llm
- evaluation
pretty_name: CMMLU
size_categories:
- 10K<n<100K
---
# CMMLU: Measuring massive multitask language understanding in Chinese
- **Homepage:** [https://github.com/haonan-li/CMMLU](https://github.com/haonan-li/CMMLU)
- **Repository:** [https://huggingface.co/datasets/haonan-li/cmmlu](https://huggingface.co/datasets/haonan-li/cmmlu)
- **Paper:** [CMMLU: Measuring Chinese Massive Multitask Language Understanding](https://arxiv.org/abs/2306.09212).
## Table of Contents
- [Introduction](#introduction)
- [Leaderboard](#leaderboard)
- [Data](#data)
- [Citation](#citation)
- [License](#license)
## Introduction
CMMLU is a comprehensive Chinese assessment suite specifically designed to evaluate the advanced knowledge and reasoning abilities of LLMs within the Chinese language and cultural context.
CMMLU covers a wide range of subjects, comprising 67 topics that span from elementary to advanced professional levels. It includes subjects that require computational expertise, such as physics and mathematics, as well as disciplines within humanities and social sciences.
Many of these tasks are not easily translatable from other languages due to their specific contextual nuances and wording.
Furthermore, numerous tasks within CMMLU have answers that are specific to China and may not be universally applicable or considered correct in other regions or languages.
## Leaderboard
Latest leaderboard is in our [github](https://github.com/haonan-li/CMMLU).
## Data
We provide a development and a test dataset for each of the 67 subjects, with 5 questions in the development set and 100+ questions in the test set.
Each question in the dataset is a multiple-choice question with 4 choices and only one correct answer.
Here are two examples:
```
题目:同一物种的两类细胞各产生一种分泌蛋白,组成这两种蛋白质的各种氨基酸含量相同,但排列顺序不同。其原因是参与这两种蛋白质合成的:
A. tRNA种类不同
B. 同一密码子所决定的氨基酸不同
C. mRNA碱基序列不同
D. 核糖体成分不同
答案是:C
```
```
题目:某种植物病毒V是通过稻飞虱吸食水稻汁液在水稻间传播的。稻田中青蛙数量的增加可减少该病毒在水稻间的传播。下列叙述正确的是:
A. 青蛙与稻飞虱是捕食关系
B. 水稻和病毒V是互利共生关系
C. 病毒V与青蛙是寄生关系
D. 水稻与青蛙是竞争关系
答案是:
```
#### Load data
```python
from datasets import load_dataset
cmmlu = load_dataset("haonan-li/cmmlu", "agronomy")
print(cmmlu['test'][0])
```
#### Load all data at once
```python
task_list = ['agronomy', 'anatomy', 'ancient_chinese', 'arts', 'astronomy', 'business_ethics', 'chinese_civil_service_exam', 'chinese_driving_rule', 'chinese_food_culture', 'chinese_foreign_policy', 'chinese_history', 'chinese_literature',
'chinese_teacher_qualification', 'clinical_knowledge', 'college_actuarial_science', 'college_education', 'college_engineering_hydrology', 'college_law', 'college_mathematics', 'college_medical_statistics', 'college_medicine', 'computer_science',
'computer_security', 'conceptual_physics', 'construction_project_management', 'economics', 'education', 'electrical_engineering', 'elementary_chinese', 'elementary_commonsense', 'elementary_information_and_technology', 'elementary_mathematics',
'ethnology', 'food_science', 'genetics', 'global_facts', 'high_school_biology', 'high_school_chemistry', 'high_school_geography', 'high_school_mathematics', 'high_school_physics', 'high_school_politics', 'human_sexuality',
'international_law', 'journalism', 'jurisprudence', 'legal_and_moral_basis', 'logical', 'machine_learning', 'management', 'marketing', 'marxist_theory', 'modern_chinese', 'nutrition', 'philosophy', 'professional_accounting', 'professional_law',
'professional_medicine', 'professional_psychology', 'public_relations', 'security_study', 'sociology', 'sports_science', 'traditional_chinese_medicine', 'virology', 'world_history', 'world_religions']
from datasets import load_dataset
cmmlu = {k: load_dataset("haonan-li/cmmlu", k) for k in task_list}
```
## Citation
```
@misc{li2023cmmlu,
title={CMMLU: Measuring massive multitask language understanding in Chinese},
author={Haonan Li and Yixuan Zhang and Fajri Koto and Yifei Yang and Hai Zhao and Yeyun Gong and Nan Duan and Timothy Baldwin},
year={2023},
eprint={2306.09212},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
The CMMLU dataset is licensed under a
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-nc-sa/4.0/).
|
tyzhu/squad_qa_title_v5_full_add3 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer
dtype: string
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 7461591
num_examples: 5073
- name: validation
num_bytes: 353148
num_examples: 300
download_size: 1450713
dataset_size: 7814739
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
sagawa/ord-uniq-canonicalized | ---
annotations_creators: []
language_creators: []
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: canonicalized ORD
size_categories:
- 1M<n<10M
source_datasets:
- original
tags:
- ord
- chemical
- reaction
task_categories:
- text2text-generation
- translation
task_ids: []
---
### dataset description
We downloaded the Open Reaction Database (ORD) dataset from [here](https://github.com/open-reaction-database/ord-data). As preprocessing, we removed duplicate entries and canonicalized the SMILES strings using RDKit.
We used the following function for canonicalization and removed SMILES that RDKit cannot parse.
```python
from rdkit import Chem

def canonicalize(mol):
    # Round-trip through RDKit to obtain the canonical SMILES.
    mol = Chem.MolToSmiles(Chem.MolFromSmiles(mol), True)
    return mol
```
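In practice, `Chem.MolFromSmiles` returns `None` for strings it cannot parse, so the unreadable SMILES mentioned above can be filtered with a `None` check. A sketch of that filtering step (an illustration, not the exact script we used):

```python
from rdkit import Chem

def safe_canonicalize(smiles):
    # Returns the canonical SMILES, or None if RDKit cannot parse the input.
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    return Chem.MolToSmiles(mol, True)

raw = ["C1=CC=CC=C1", "this-is-not-smiles", "CCO"]
# Unparsable entries are dropped; valid ones are canonicalized.
clean = [s for s in map(safe_canonicalize, raw) if s is not None]
```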
We randomly split the preprocessed data into train, validation, and test sets with an 8:1:1 ratio. |
teilomillet/system_prompt | ---
license: cc-by-4.0
---
|
haoxiangsnr/Intel-N-DNS-Test-Dataset | ---
license: mit
---
|
zion84006/speech_chatgpt | ---
dataset_info:
features:
- name: file_id
dtype: string
- name: instruction
dtype: string
- name: transcription
dtype: string
- name: src_encodec_0
sequence: int64
- name: src_encodec_1
sequence: int64
- name: src_encodec_2
sequence: int64
- name: src_encodec_3
sequence: int64
- name: src_encodec_4
sequence: int64
- name: src_encodec_5
sequence: int64
- name: src_encodec_6
sequence: int64
- name: src_encodec_7
sequence: int64
- name: tgt_encodec_0
sequence: int64
- name: tgt_encodec_1
sequence: int64
- name: tgt_encodec_2
sequence: int64
- name: tgt_encodec_3
sequence: int64
- name: tgt_encodec_4
sequence: int64
- name: tgt_encodec_5
sequence: int64
- name: tgt_encodec_6
sequence: int64
- name: tgt_encodec_7
sequence: int64
splits:
- name: train
num_bytes: 206456352
num_examples: 5311
- name: validation
num_bytes: 5602794
num_examples: 152
- name: test
num_bytes: 8155880
num_examples: 152
download_size: 34937248
dataset_size: 220215026
---
# Dataset Card for "speech_chatgpt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-741567-2252771791 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cnn_dailymail
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-xl-16384-book-summary
metrics: []
dataset_name: cnn_dailymail
dataset_config: 3.0.0
dataset_split: test
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-xl-16384-book-summary
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. |
Megnis/MMLU_dataset_LlaMa_style | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 6962332
num_examples: 14042
- name: validation
num_bytes: 762896
num_examples: 1531
- name: dev
num_bytes: 125288
num_examples: 285
- name: auxiliary_train
num_bytes: 162697939
num_examples: 99842
download_size: 50617642
dataset_size: 170548455
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: validation
path: data/validation-*
- split: dev
path: data/dev-*
- split: auxiliary_train
path: data/auxiliary_train-*
---
|
open-llm-leaderboard/details_INSAIT-Institute__BgGPT-7B-Instruct-v0.1 | ---
pretty_name: Evaluation run of INSAIT-Institute/BgGPT-7B-Instruct-v0.1
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [INSAIT-Institute/BgGPT-7B-Instruct-v0.1](https://huggingface.co/INSAIT-Institute/BgGPT-7B-Instruct-v0.1)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_INSAIT-Institute__BgGPT-7B-Instruct-v0.1\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-18T17:19:30.595727](https://huggingface.co/datasets/open-llm-leaderboard/details_INSAIT-Institute__BgGPT-7B-Instruct-v0.1/blob/main/results_2024-02-18T17-19-30.595727.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5988578948188136,\n\
\ \"acc_stderr\": 0.03314721162029684,\n \"acc_norm\": 0.6004390874684334,\n\
\ \"acc_norm_stderr\": 0.03381955360558513,\n \"mc1\": 0.37576499388004897,\n\
\ \"mc1_stderr\": 0.016954584060214297,\n \"mc2\": 0.5368100172224678,\n\
\ \"mc2_stderr\": 0.015301815507488754\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.568259385665529,\n \"acc_stderr\": 0.014474591427196202,\n\
\ \"acc_norm\": 0.6023890784982935,\n \"acc_norm_stderr\": 0.01430175222327954\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6171081457876917,\n\
\ \"acc_stderr\": 0.004850988215167536,\n \"acc_norm\": 0.8159729137621987,\n\
\ \"acc_norm_stderr\": 0.003867143274914471\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.26,\n \"acc_stderr\": 0.0440844002276808,\n \
\ \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.0440844002276808\n },\n\
\ \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5555555555555556,\n\
\ \"acc_stderr\": 0.04292596718256981,\n \"acc_norm\": 0.5555555555555556,\n\
\ \"acc_norm_stderr\": 0.04292596718256981\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6644736842105263,\n \"acc_stderr\": 0.038424985593952694,\n\
\ \"acc_norm\": 0.6644736842105263,\n \"acc_norm_stderr\": 0.038424985593952694\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.56,\n\
\ \"acc_stderr\": 0.0498887651569859,\n \"acc_norm\": 0.56,\n \
\ \"acc_norm_stderr\": 0.0498887651569859\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6679245283018868,\n \"acc_stderr\": 0.028985455652334395,\n\
\ \"acc_norm\": 0.6679245283018868,\n \"acc_norm_stderr\": 0.028985455652334395\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6805555555555556,\n\
\ \"acc_stderr\": 0.038990736873573344,\n \"acc_norm\": 0.6805555555555556,\n\
\ \"acc_norm_stderr\": 0.038990736873573344\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.42,\n \"acc_stderr\": 0.049604496374885836,\n \
\ \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.049604496374885836\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.45,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.45,\n \
\ \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.37,\n \"acc_stderr\": 0.048523658709391,\n \
\ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.048523658709391\n },\n\
\ \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5895953757225434,\n\
\ \"acc_stderr\": 0.03750757044895537,\n \"acc_norm\": 0.5895953757225434,\n\
\ \"acc_norm_stderr\": 0.03750757044895537\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.3137254901960784,\n \"acc_stderr\": 0.04617034827006716,\n\
\ \"acc_norm\": 0.3137254901960784,\n \"acc_norm_stderr\": 0.04617034827006716\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n\
\ \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5106382978723404,\n \"acc_stderr\": 0.03267862331014063,\n\
\ \"acc_norm\": 0.5106382978723404,\n \"acc_norm_stderr\": 0.03267862331014063\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4649122807017544,\n\
\ \"acc_stderr\": 0.046920083813689104,\n \"acc_norm\": 0.4649122807017544,\n\
\ \"acc_norm_stderr\": 0.046920083813689104\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.503448275862069,\n \"acc_stderr\": 0.04166567577101579,\n\
\ \"acc_norm\": 0.503448275862069,\n \"acc_norm_stderr\": 0.04166567577101579\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.42857142857142855,\n \"acc_stderr\": 0.025487187147859372,\n \"\
acc_norm\": 0.42857142857142855,\n \"acc_norm_stderr\": 0.025487187147859372\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3333333333333333,\n\
\ \"acc_stderr\": 0.04216370213557836,\n \"acc_norm\": 0.3333333333333333,\n\
\ \"acc_norm_stderr\": 0.04216370213557836\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.43,\n \"acc_stderr\": 0.049756985195624284,\n \
\ \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.049756985195624284\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.7193548387096774,\n \"acc_stderr\": 0.02556060472102289,\n \"\
acc_norm\": 0.7193548387096774,\n \"acc_norm_stderr\": 0.02556060472102289\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.45320197044334976,\n \"acc_stderr\": 0.03502544650845872,\n \"\
acc_norm\": 0.45320197044334976,\n \"acc_norm_stderr\": 0.03502544650845872\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.62,\n \"acc_stderr\": 0.04878317312145633,\n \"acc_norm\"\
: 0.62,\n \"acc_norm_stderr\": 0.04878317312145633\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7454545454545455,\n \"acc_stderr\": 0.03401506715249039,\n\
\ \"acc_norm\": 0.7454545454545455,\n \"acc_norm_stderr\": 0.03401506715249039\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7373737373737373,\n \"acc_stderr\": 0.03135305009533086,\n \"\
acc_norm\": 0.7373737373737373,\n \"acc_norm_stderr\": 0.03135305009533086\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8497409326424871,\n \"acc_stderr\": 0.025787723180723875,\n\
\ \"acc_norm\": 0.8497409326424871,\n \"acc_norm_stderr\": 0.025787723180723875\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.5871794871794872,\n \"acc_stderr\": 0.0249626835643318,\n \
\ \"acc_norm\": 0.5871794871794872,\n \"acc_norm_stderr\": 0.0249626835643318\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3333333333333333,\n \"acc_stderr\": 0.028742040903948496,\n \
\ \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.028742040903948496\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.5882352941176471,\n \"acc_stderr\": 0.03196876989195778,\n \
\ \"acc_norm\": 0.5882352941176471,\n \"acc_norm_stderr\": 0.03196876989195778\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.36423841059602646,\n \"acc_stderr\": 0.03929111781242741,\n \"\
acc_norm\": 0.36423841059602646,\n \"acc_norm_stderr\": 0.03929111781242741\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.7743119266055046,\n \"acc_stderr\": 0.017923087667803064,\n \"\
acc_norm\": 0.7743119266055046,\n \"acc_norm_stderr\": 0.017923087667803064\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.4537037037037037,\n \"acc_stderr\": 0.03395322726375797,\n \"\
acc_norm\": 0.4537037037037037,\n \"acc_norm_stderr\": 0.03395322726375797\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8088235294117647,\n \"acc_stderr\": 0.027599174300640763,\n \"\
acc_norm\": 0.8088235294117647,\n \"acc_norm_stderr\": 0.027599174300640763\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7763713080168776,\n \"acc_stderr\": 0.027123298205229966,\n \
\ \"acc_norm\": 0.7763713080168776,\n \"acc_norm_stderr\": 0.027123298205229966\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.672645739910314,\n\
\ \"acc_stderr\": 0.03149384670994131,\n \"acc_norm\": 0.672645739910314,\n\
\ \"acc_norm_stderr\": 0.03149384670994131\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.6946564885496184,\n \"acc_stderr\": 0.04039314978724561,\n\
\ \"acc_norm\": 0.6946564885496184,\n \"acc_norm_stderr\": 0.04039314978724561\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7355371900826446,\n \"acc_stderr\": 0.040261875275912046,\n \"\
acc_norm\": 0.7355371900826446,\n \"acc_norm_stderr\": 0.040261875275912046\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7314814814814815,\n\
\ \"acc_stderr\": 0.042844679680521934,\n \"acc_norm\": 0.7314814814814815,\n\
\ \"acc_norm_stderr\": 0.042844679680521934\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7177914110429447,\n \"acc_stderr\": 0.03536117886664742,\n\
\ \"acc_norm\": 0.7177914110429447,\n \"acc_norm_stderr\": 0.03536117886664742\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.3125,\n\
\ \"acc_stderr\": 0.043994650575715215,\n \"acc_norm\": 0.3125,\n\
\ \"acc_norm_stderr\": 0.043994650575715215\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7378640776699029,\n \"acc_stderr\": 0.043546310772605956,\n\
\ \"acc_norm\": 0.7378640776699029,\n \"acc_norm_stderr\": 0.043546310772605956\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8504273504273504,\n\
\ \"acc_stderr\": 0.023365051491753722,\n \"acc_norm\": 0.8504273504273504,\n\
\ \"acc_norm_stderr\": 0.023365051491753722\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \
\ \"acc_norm\": 0.66,\n \"acc_norm_stderr\": 0.04760952285695237\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7867177522349936,\n\
\ \"acc_stderr\": 0.014648172749593513,\n \"acc_norm\": 0.7867177522349936,\n\
\ \"acc_norm_stderr\": 0.014648172749593513\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6791907514450867,\n \"acc_stderr\": 0.025131000233647886,\n\
\ \"acc_norm\": 0.6791907514450867,\n \"acc_norm_stderr\": 0.025131000233647886\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.293854748603352,\n\
\ \"acc_stderr\": 0.015235075776719616,\n \"acc_norm\": 0.293854748603352,\n\
\ \"acc_norm_stderr\": 0.015235075776719616\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.026787453111906497,\n\
\ \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.026787453111906497\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6655948553054662,\n\
\ \"acc_stderr\": 0.026795422327893934,\n \"acc_norm\": 0.6655948553054662,\n\
\ \"acc_norm_stderr\": 0.026795422327893934\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6759259259259259,\n \"acc_stderr\": 0.02604176620271716,\n\
\ \"acc_norm\": 0.6759259259259259,\n \"acc_norm_stderr\": 0.02604176620271716\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.45390070921985815,\n \"acc_stderr\": 0.029700453247291463,\n \
\ \"acc_norm\": 0.45390070921985815,\n \"acc_norm_stderr\": 0.029700453247291463\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4511082138200782,\n\
\ \"acc_stderr\": 0.012709037347346233,\n \"acc_norm\": 0.4511082138200782,\n\
\ \"acc_norm_stderr\": 0.012709037347346233\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6213235294117647,\n \"acc_stderr\": 0.02946513363977613,\n\
\ \"acc_norm\": 0.6213235294117647,\n \"acc_norm_stderr\": 0.02946513363977613\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.5980392156862745,\n \"acc_stderr\": 0.019835176484375393,\n \
\ \"acc_norm\": 0.5980392156862745,\n \"acc_norm_stderr\": 0.019835176484375393\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n\
\ \"acc_stderr\": 0.04461272175910509,\n \"acc_norm\": 0.6818181818181818,\n\
\ \"acc_norm_stderr\": 0.04461272175910509\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.673469387755102,\n \"acc_stderr\": 0.03002105623844031,\n\
\ \"acc_norm\": 0.673469387755102,\n \"acc_norm_stderr\": 0.03002105623844031\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8159203980099502,\n\
\ \"acc_stderr\": 0.02740385941078684,\n \"acc_norm\": 0.8159203980099502,\n\
\ \"acc_norm_stderr\": 0.02740385941078684\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.82,\n \"acc_stderr\": 0.038612291966536955,\n \
\ \"acc_norm\": 0.82,\n \"acc_norm_stderr\": 0.038612291966536955\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4819277108433735,\n\
\ \"acc_stderr\": 0.038899512528272166,\n \"acc_norm\": 0.4819277108433735,\n\
\ \"acc_norm_stderr\": 0.038899512528272166\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.7953216374269005,\n \"acc_stderr\": 0.030944459778533193,\n\
\ \"acc_norm\": 0.7953216374269005,\n \"acc_norm_stderr\": 0.030944459778533193\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.37576499388004897,\n\
\ \"mc1_stderr\": 0.016954584060214297,\n \"mc2\": 0.5368100172224678,\n\
\ \"mc2_stderr\": 0.015301815507488754\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7703235990528808,\n \"acc_stderr\": 0.011821645601838234\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5670962850644428,\n \
\ \"acc_stderr\": 0.013647916362576047\n }\n}\n```"
repo_url: https://huggingface.co/INSAIT-Institute/BgGPT-7B-Instruct-v0.1
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|arc:challenge|25_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|gsm8k|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hellaswag|10_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-18T17-19-30.595727.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-18T17-19-30.595727.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- '**/details_harness|winogrande|5_2024-02-18T17-19-30.595727.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-18T17-19-30.595727.parquet'
- config_name: results
data_files:
- split: 2024_02_18T17_19_30.595727
path:
- results_2024-02-18T17-19-30.595727.parquet
- split: latest
path:
- results_2024-02-18T17-19-30.595727.parquet
---
# Dataset Card for Evaluation run of INSAIT-Institute/BgGPT-7B-Instruct-v0.1
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [INSAIT-Institute/BgGPT-7B-Instruct-v0.1](https://huggingface.co/INSAIT-Institute/BgGPT-7B-Instruct-v0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration, "results", stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_INSAIT-Institute__BgGPT-7B-Instruct-v0.1",
	"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-02-18T17:19:30.595727](https://huggingface.co/datasets/open-llm-leaderboard/details_INSAIT-Institute__BgGPT-7B-Instruct-v0.1/blob/main/results_2024-02-18T17-19-30.595727.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each task in its own configuration, under the "latest" split of each eval):
```python
{
"all": {
"acc": 0.5988578948188136,
"acc_stderr": 0.03314721162029684,
"acc_norm": 0.6004390874684334,
"acc_norm_stderr": 0.03381955360558513,
"mc1": 0.37576499388004897,
"mc1_stderr": 0.016954584060214297,
"mc2": 0.5368100172224678,
"mc2_stderr": 0.015301815507488754
},
"harness|arc:challenge|25": {
"acc": 0.568259385665529,
"acc_stderr": 0.014474591427196202,
"acc_norm": 0.6023890784982935,
"acc_norm_stderr": 0.01430175222327954
},
"harness|hellaswag|10": {
"acc": 0.6171081457876917,
"acc_stderr": 0.004850988215167536,
"acc_norm": 0.8159729137621987,
"acc_norm_stderr": 0.003867143274914471
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.26,
"acc_stderr": 0.0440844002276808,
"acc_norm": 0.26,
"acc_norm_stderr": 0.0440844002276808
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5555555555555556,
"acc_stderr": 0.04292596718256981,
"acc_norm": 0.5555555555555556,
"acc_norm_stderr": 0.04292596718256981
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6644736842105263,
"acc_stderr": 0.038424985593952694,
"acc_norm": 0.6644736842105263,
"acc_norm_stderr": 0.038424985593952694
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.56,
"acc_stderr": 0.0498887651569859,
"acc_norm": 0.56,
"acc_norm_stderr": 0.0498887651569859
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6679245283018868,
"acc_stderr": 0.028985455652334395,
"acc_norm": 0.6679245283018868,
"acc_norm_stderr": 0.028985455652334395
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6805555555555556,
"acc_stderr": 0.038990736873573344,
"acc_norm": 0.6805555555555556,
"acc_norm_stderr": 0.038990736873573344
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.42,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.42,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.37,
"acc_stderr": 0.048523658709391,
"acc_norm": 0.37,
"acc_norm_stderr": 0.048523658709391
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5895953757225434,
"acc_stderr": 0.03750757044895537,
"acc_norm": 0.5895953757225434,
"acc_norm_stderr": 0.03750757044895537
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3137254901960784,
"acc_stderr": 0.04617034827006716,
"acc_norm": 0.3137254901960784,
"acc_norm_stderr": 0.04617034827006716
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5106382978723404,
"acc_stderr": 0.03267862331014063,
"acc_norm": 0.5106382978723404,
"acc_norm_stderr": 0.03267862331014063
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4649122807017544,
"acc_stderr": 0.046920083813689104,
"acc_norm": 0.4649122807017544,
"acc_norm_stderr": 0.046920083813689104
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.503448275862069,
"acc_stderr": 0.04166567577101579,
"acc_norm": 0.503448275862069,
"acc_norm_stderr": 0.04166567577101579
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.025487187147859372,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.025487187147859372
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3333333333333333,
"acc_stderr": 0.04216370213557836,
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.04216370213557836
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.43,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.43,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7193548387096774,
"acc_stderr": 0.02556060472102289,
"acc_norm": 0.7193548387096774,
"acc_norm_stderr": 0.02556060472102289
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.45320197044334976,
"acc_stderr": 0.03502544650845872,
"acc_norm": 0.45320197044334976,
"acc_norm_stderr": 0.03502544650845872
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.62,
"acc_stderr": 0.04878317312145633,
"acc_norm": 0.62,
"acc_norm_stderr": 0.04878317312145633
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7454545454545455,
"acc_stderr": 0.03401506715249039,
"acc_norm": 0.7454545454545455,
"acc_norm_stderr": 0.03401506715249039
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7373737373737373,
"acc_stderr": 0.03135305009533086,
"acc_norm": 0.7373737373737373,
"acc_norm_stderr": 0.03135305009533086
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8497409326424871,
"acc_stderr": 0.025787723180723875,
"acc_norm": 0.8497409326424871,
"acc_norm_stderr": 0.025787723180723875
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5871794871794872,
"acc_stderr": 0.0249626835643318,
"acc_norm": 0.5871794871794872,
"acc_norm_stderr": 0.0249626835643318
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3333333333333333,
"acc_stderr": 0.028742040903948496,
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.028742040903948496
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.5882352941176471,
"acc_stderr": 0.03196876989195778,
"acc_norm": 0.5882352941176471,
"acc_norm_stderr": 0.03196876989195778
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.36423841059602646,
"acc_stderr": 0.03929111781242741,
"acc_norm": 0.36423841059602646,
"acc_norm_stderr": 0.03929111781242741
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7743119266055046,
"acc_stderr": 0.017923087667803064,
"acc_norm": 0.7743119266055046,
"acc_norm_stderr": 0.017923087667803064
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4537037037037037,
"acc_stderr": 0.03395322726375797,
"acc_norm": 0.4537037037037037,
"acc_norm_stderr": 0.03395322726375797
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8088235294117647,
"acc_stderr": 0.027599174300640763,
"acc_norm": 0.8088235294117647,
"acc_norm_stderr": 0.027599174300640763
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7763713080168776,
"acc_stderr": 0.027123298205229966,
"acc_norm": 0.7763713080168776,
"acc_norm_stderr": 0.027123298205229966
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.672645739910314,
"acc_stderr": 0.03149384670994131,
"acc_norm": 0.672645739910314,
"acc_norm_stderr": 0.03149384670994131
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6946564885496184,
"acc_stderr": 0.04039314978724561,
"acc_norm": 0.6946564885496184,
"acc_norm_stderr": 0.04039314978724561
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7355371900826446,
"acc_stderr": 0.040261875275912046,
"acc_norm": 0.7355371900826446,
"acc_norm_stderr": 0.040261875275912046
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7314814814814815,
"acc_stderr": 0.042844679680521934,
"acc_norm": 0.7314814814814815,
"acc_norm_stderr": 0.042844679680521934
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7177914110429447,
"acc_stderr": 0.03536117886664742,
"acc_norm": 0.7177914110429447,
"acc_norm_stderr": 0.03536117886664742
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.3125,
"acc_stderr": 0.043994650575715215,
"acc_norm": 0.3125,
"acc_norm_stderr": 0.043994650575715215
},
"harness|hendrycksTest-management|5": {
"acc": 0.7378640776699029,
"acc_stderr": 0.043546310772605956,
"acc_norm": 0.7378640776699029,
"acc_norm_stderr": 0.043546310772605956
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8504273504273504,
"acc_stderr": 0.023365051491753722,
"acc_norm": 0.8504273504273504,
"acc_norm_stderr": 0.023365051491753722
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695237,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695237
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7867177522349936,
"acc_stderr": 0.014648172749593513,
"acc_norm": 0.7867177522349936,
"acc_norm_stderr": 0.014648172749593513
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6791907514450867,
"acc_stderr": 0.025131000233647886,
"acc_norm": 0.6791907514450867,
"acc_norm_stderr": 0.025131000233647886
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.293854748603352,
"acc_stderr": 0.015235075776719616,
"acc_norm": 0.293854748603352,
"acc_norm_stderr": 0.015235075776719616
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.026787453111906497,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.026787453111906497
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6655948553054662,
"acc_stderr": 0.026795422327893934,
"acc_norm": 0.6655948553054662,
"acc_norm_stderr": 0.026795422327893934
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6759259259259259,
"acc_stderr": 0.02604176620271716,
"acc_norm": 0.6759259259259259,
"acc_norm_stderr": 0.02604176620271716
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.45390070921985815,
"acc_stderr": 0.029700453247291463,
"acc_norm": 0.45390070921985815,
"acc_norm_stderr": 0.029700453247291463
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4511082138200782,
"acc_stderr": 0.012709037347346233,
"acc_norm": 0.4511082138200782,
"acc_norm_stderr": 0.012709037347346233
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6213235294117647,
"acc_stderr": 0.02946513363977613,
"acc_norm": 0.6213235294117647,
"acc_norm_stderr": 0.02946513363977613
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5980392156862745,
"acc_stderr": 0.019835176484375393,
"acc_norm": 0.5980392156862745,
"acc_norm_stderr": 0.019835176484375393
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6818181818181818,
"acc_stderr": 0.04461272175910509,
"acc_norm": 0.6818181818181818,
"acc_norm_stderr": 0.04461272175910509
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.673469387755102,
"acc_stderr": 0.03002105623844031,
"acc_norm": 0.673469387755102,
"acc_norm_stderr": 0.03002105623844031
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8159203980099502,
"acc_stderr": 0.02740385941078684,
"acc_norm": 0.8159203980099502,
"acc_norm_stderr": 0.02740385941078684
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.82,
"acc_stderr": 0.038612291966536955,
"acc_norm": 0.82,
"acc_norm_stderr": 0.038612291966536955
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4819277108433735,
"acc_stderr": 0.038899512528272166,
"acc_norm": 0.4819277108433735,
"acc_norm_stderr": 0.038899512528272166
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7953216374269005,
"acc_stderr": 0.030944459778533193,
"acc_norm": 0.7953216374269005,
"acc_norm_stderr": 0.030944459778533193
},
"harness|truthfulqa:mc|0": {
"mc1": 0.37576499388004897,
"mc1_stderr": 0.016954584060214297,
"mc2": 0.5368100172224678,
"mc2_stderr": 0.015301815507488754
},
"harness|winogrande|5": {
"acc": 0.7703235990528808,
"acc_stderr": 0.011821645601838234
},
"harness|gsm8k|5": {
"acc": 0.5670962850644428,
"acc_stderr": 0.013647916362576047
}
}
```
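Each task entry in the results file above shares the same shape: an `acc`/`acc_stderr` pair, plus `acc_norm`/`acc_norm_stderr` (or `mc1`/`mc2` for TruthfulQA) where applicable. As a minimal sketch of how such records can be consumed once the JSON file is downloaded, the snippet below embeds a small excerpt of the results shown above rather than fetching the full file:

```python
import json

# A small excerpt of the per-task results JSON shown above.
results_excerpt = """
{
  "harness|hendrycksTest-abstract_algebra|5": {
    "acc": 0.26,
    "acc_stderr": 0.0440844002276808,
    "acc_norm": 0.26,
    "acc_norm_stderr": 0.0440844002276808
  },
  "harness|winogrande|5": {
    "acc": 0.7703235990528808,
    "acc_stderr": 0.011821645601838234
  }
}
"""

results = json.loads(results_excerpt)

# Collect the plain accuracy for every task; every task entry
# in this file carries an "acc" field, so no fallback is needed here.
accuracies = {task: metrics["acc"] for task, metrics in results.items()}
print(accuracies["harness|winogrande|5"])  # → 0.7703235990528808
```

The same pattern applies to the full `results_2024-02-18T17-19-30.595727.json` file once downloaded from the repository.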
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Whispering-GPT/two-minute-papers | ---
task_categories:
- automatic-speech-recognition
dataset_info:
features:
- name: CHANNEL_NAME
dtype: string
- name: URL
dtype: string
- name: TITLE
dtype: string
- name: DESCRIPTION
dtype: string
- name: TRANSCRIPTION
dtype: string
- name: SEGMENTS
dtype: string
splits:
- name: train
num_bytes: 10435074
num_examples: 737
download_size: 4626170
dataset_size: 10435074
tags:
- whisper
- whispering
- base
---
# Dataset Card for "two-minute-papers"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
111wwwww/DWDSCC | ---
license: openrail
---
|
kaleemWaheed/twitter_dataset_1713087888 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 10034
num_examples: 23
download_size: 9879
dataset_size: 10034
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CyberHarem/abukuma_kantaicollection | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of abukuma/阿武隈/阿武隈 (Kantai Collection)
This is the dataset of abukuma/阿武隈/阿武隈 (Kantai Collection), containing 500 images and their tags.
The core tags of this character are `blonde_hair, long_hair, blue_eyes, double_bun, hair_bun, hair_rings, twintails, hair_between_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g., danbooru, pixiv, zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
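The core-tag pruning described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the actual pipeline code; `prune_core_tags` is a hypothetical helper, and the tag set is simply the core tags listed above:

```python
# Core tags inherent to the character (listed above). Pruning them leaves
# captions that describe outfit/pose rather than restating identity traits.
CORE_TAGS = {
    "blonde_hair", "long_hair", "blue_eyes", "double_bun",
    "hair_bun", "hair_rings", "twintails", "hair_between_eyes",
}

def prune_core_tags(tags):
    """Drop tags that merely restate the character's fixed traits."""
    return [t for t in tags if t not in CORE_TAGS]

print(prune_core_tags(["1girl", "blonde_hair", "serafuku", "twintails", "solo"]))
# -> ['1girl', 'serafuku', 'solo']
```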
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 476.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/abukuma_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 311.64 MiB | [Download](https://huggingface.co/datasets/CyberHarem/abukuma_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1134 | 657.47 MiB | [Download](https://huggingface.co/datasets/CyberHarem/abukuma_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 440.60 MiB | [Download](https://huggingface.co/datasets/CyberHarem/abukuma_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1134 | 870.16 MiB | [Download](https://huggingface.co/datasets/CyberHarem/abukuma_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/abukuma_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from them.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, bike_shorts, black_gloves, black_jacket, blush, grey_sailor_collar, grey_skirt, looking_at_viewer, neck_ribbon, one-hour_drawing_challenge, pleated_skirt, red_ribbon, serafuku, short_sleeves, shorts_under_skirt, simple_background, solo, twitter_username, white_background, open_mouth, partially_fingerless_gloves, breasts, smile |
| 1 | 10 |  |  |  |  |  | 1girl, bike_shorts, black_gloves, black_jacket, grey_sailor_collar, grey_skirt, neck_ribbon, partially_fingerless_gloves, pleated_skirt, red_ribbon, serafuku, short_sleeves, shorts_under_skirt, solo, white_background, cowboy_shot, simple_background, looking_at_viewer, bangs |
| 2 | 6 |  |  |  |  |  | 1girl, black_jacket, grey_sailor_collar, grey_skirt, looking_at_viewer, neck_ribbon, pleated_skirt, red_ribbon, serafuku, short_sleeves, simple_background, solo, twitter_username, white_background, bike_shorts, cowboy_shot, dated, shorts_under_skirt, one-hour_drawing_challenge |
| 3 | 6 |  |  |  |  |  | 1girl, bike_shorts, black_footwear, black_gloves, black_jacket, grey_sailor_collar, grey_skirt, neck_ribbon, partially_fingerless_gloves, pleated_skirt, red_ribbon, serafuku, short_sleeves, shorts_under_skirt, solo, bangs, full_body, simple_background, knee_boots, standing, white_background, open_mouth, smile |
| 4 | 5 |  |  |  |  |  | 1girl, black_gloves, black_jacket, grey_sailor_collar, grey_skirt, looking_at_viewer, neck_ribbon, partially_fingerless_gloves, pleated_skirt, red_ribbon, serafuku, short_sleeves, solo, open_mouth, upper_body, bangs, blush, simple_background, white_background |
| 5 | 11 |  |  |  |  |  | 1girl, black_gloves, black_jacket, grey_sailor_collar, neck_ribbon, partially_fingerless_gloves, red_ribbon, serafuku, short_sleeves, solo, upper_body, white_background, simple_background, bangs, looking_at_viewer, smile, open_mouth |
| 6 | 16 |  |  |  |  |  | 1girl, bike_shorts, serafuku, solo, looking_at_viewer, pleated_skirt, black_gloves, open_mouth, smile, blush, fingerless_gloves |
| 7 | 7 |  |  |  |  |  | 1girl, looking_at_viewer, serafuku, solo, pleated_skirt, blush, open_mouth, white_background |
| 8 | 27 |  |  |  |  |  | 1girl, solo, looking_at_viewer, black_sweater, pleated_skirt, simple_background, white_background, smile, bag, black_pantyhose, hooded_jacket, open_mouth, cowboy_shot, long_sleeves, official_alternate_costume |
| 9 | 8 |  |  |  |  |  | 1girl, small_breasts, solo, white_background, white_bikini, cowboy_shot, side-tie_bikini_bottom, blush, looking_at_viewer, navel, front-tie_top, simple_background |
| 10 | 6 |  |  |  |  |  | detached_collar, playboy_bunny, wrist_cuffs, 1girl, black_pantyhose, fake_animal_ears, rabbit_ears, simple_background, small_breasts, solo, strapless_leotard, white_background, black_leotard, blush, bowtie, dated, rabbit_tail, fake_tail, looking_at_viewer, open_mouth, red_ribbon |
| 11 | 6 |  |  |  |  |  | 1girl, red_capelet, red_skirt, black_gloves, little_red_riding_hood_(grimm)_(cosplay), official_alternate_costume, solo, white_shirt, corset, frilled_skirt, hair_down, hooded_capelet, open_mouth, red_hood, alternate_hairstyle, cowboy_shot, looking_at_viewer, smile, white_thighhighs |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bike_shorts | black_gloves | black_jacket | blush | grey_sailor_collar | grey_skirt | looking_at_viewer | neck_ribbon | one-hour_drawing_challenge | pleated_skirt | red_ribbon | serafuku | short_sleeves | shorts_under_skirt | simple_background | solo | twitter_username | white_background | open_mouth | partially_fingerless_gloves | breasts | smile | cowboy_shot | bangs | dated | black_footwear | full_body | knee_boots | standing | upper_body | fingerless_gloves | black_sweater | bag | black_pantyhose | hooded_jacket | long_sleeves | official_alternate_costume | small_breasts | white_bikini | side-tie_bikini_bottom | navel | front-tie_top | detached_collar | playboy_bunny | wrist_cuffs | fake_animal_ears | rabbit_ears | strapless_leotard | black_leotard | bowtie | rabbit_tail | fake_tail | red_capelet | red_skirt | little_red_riding_hood_(grimm)_(cosplay) | white_shirt | corset | frilled_skirt | hair_down | hooded_capelet | red_hood | alternate_hairstyle | white_thighhighs |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:--------------|:---------------|:---------------|:--------|:---------------------|:-------------|:--------------------|:--------------|:-----------------------------|:----------------|:-------------|:-----------|:----------------|:---------------------|:--------------------|:-------|:-------------------|:-------------------|:-------------|:------------------------------|:----------|:--------|:--------------|:--------|:--------|:-----------------|:------------|:-------------|:-----------|:-------------|:--------------------|:----------------|:------|:------------------|:----------------|:---------------|:-----------------------------|:----------------|:---------------|:-------------------------|:--------|:----------------|:------------------|:----------------|:--------------|:-------------------|:--------------|:--------------------|:----------------|:---------|:--------------|:------------|:--------------|:------------|:-------------------------------------------|:--------------|:---------|:----------------|:------------|:-----------------|:-----------|:----------------------|:-------------------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 10 |  |  |  |  |  | X | X | X | X | | X | X | X | X | | X | X | X | X | X | X | X | | X | | X | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | X | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | X | X | X | | X | X | | X | | X | X | X | X | X | X | X | | X | X | X | | X | | X | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 5 |  |  |  |  |  | X | | X | X | X | X | X | X | X | | X | X | X | X | | X | X | | X | X | X | | | | X | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 11 |  |  |  |  |  | X | | X | X | | X | | X | X | | | X | X | X | | X | X | | X | X | X | | X | | X | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 16 |  |  |  |  |  | X | X | X | | X | | | X | | | X | | X | | | | X | | | X | | | X | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 7 |  |  |  |  |  | X | | | | X | | | X | | | X | | X | | | | X | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 27 |  |  |  |  |  | X | | | | | | | X | | | X | | | | | X | X | | X | X | | | X | X | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 9 | 8 |  |  |  |  |  | X | | | | X | | | X | | | | | | | | X | X | | X | | | | | X | | | | | | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 10 | 6 |  |  |  |  |  | X | | | | X | | | X | | | | X | | | | X | X | | X | X | | | | | | X | | | | | | | | | X | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | |
| 11 | 6 |  |  |  |  |  | X | | X | | | | | X | | | | | | | | | X | | | X | | | X | X | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X |
|
open-llm-leaderboard/details_BFauber__opt125m_10e5_50ep | ---
pretty_name: Evaluation run of BFauber/opt125m_10e5_50ep
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [BFauber/opt125m_10e5_50ep](https://huggingface.co/BFauber/opt125m_10e5_50ep)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_BFauber__opt125m_10e5_50ep\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-02T19:50:08.433413](https://huggingface.co/datasets/open-llm-leaderboard/details_BFauber__opt125m_10e5_50ep/blob/main/results_2024-02-02T19-50-08.433413.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.23829067580904603,\n\
\ \"acc_stderr\": 0.030168081539566568,\n \"acc_norm\": 0.2383084921515475,\n\
\ \"acc_norm_stderr\": 0.030960161901202803,\n \"mc1\": 0.24724602203182375,\n\
\ \"mc1_stderr\": 0.015102404797359652,\n \"mc2\": 0.4829960000266921,\n\
\ \"mc2_stderr\": 0.01598626138943452\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.22013651877133106,\n \"acc_stderr\": 0.012108124883460983,\n\
\ \"acc_norm\": 0.23890784982935154,\n \"acc_norm_stderr\": 0.012461071376316614\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.27106154152559253,\n\
\ \"acc_stderr\": 0.004435993492583855,\n \"acc_norm\": 0.28978291177056364,\n\
\ \"acc_norm_stderr\": 0.0045273436511308095\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.29,\n \"acc_stderr\": 0.04560480215720684,\n \
\ \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.04560480215720684\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.31851851851851853,\n\
\ \"acc_stderr\": 0.040247784019771096,\n \"acc_norm\": 0.31851851851851853,\n\
\ \"acc_norm_stderr\": 0.040247784019771096\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.17763157894736842,\n \"acc_stderr\": 0.031103182383123398,\n\
\ \"acc_norm\": 0.17763157894736842,\n \"acc_norm_stderr\": 0.031103182383123398\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.36,\n\
\ \"acc_stderr\": 0.048241815132442176,\n \"acc_norm\": 0.36,\n \
\ \"acc_norm_stderr\": 0.048241815132442176\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.20754716981132076,\n \"acc_stderr\": 0.02495991802891127,\n\
\ \"acc_norm\": 0.20754716981132076,\n \"acc_norm_stderr\": 0.02495991802891127\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.20833333333333334,\n\
\ \"acc_stderr\": 0.033961162058453336,\n \"acc_norm\": 0.20833333333333334,\n\
\ \"acc_norm_stderr\": 0.033961162058453336\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.21,\n \"acc_stderr\": 0.04093601807403326,\n \
\ \"acc_norm\": 0.21,\n \"acc_norm_stderr\": 0.04093601807403326\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.2,\n \"acc_stderr\": 0.04020151261036845,\n \"acc_norm\": 0.2,\n\
\ \"acc_norm_stderr\": 0.04020151261036845\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.19,\n \"acc_stderr\": 0.039427724440366234,\n \
\ \"acc_norm\": 0.19,\n \"acc_norm_stderr\": 0.039427724440366234\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.2254335260115607,\n\
\ \"acc_stderr\": 0.03186209851641144,\n \"acc_norm\": 0.2254335260115607,\n\
\ \"acc_norm_stderr\": 0.03186209851641144\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.21568627450980393,\n \"acc_stderr\": 0.04092563958237654,\n\
\ \"acc_norm\": 0.21568627450980393,\n \"acc_norm_stderr\": 0.04092563958237654\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n\
\ \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.25957446808510637,\n \"acc_stderr\": 0.02865917937429232,\n\
\ \"acc_norm\": 0.25957446808510637,\n \"acc_norm_stderr\": 0.02865917937429232\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.24561403508771928,\n\
\ \"acc_stderr\": 0.0404933929774814,\n \"acc_norm\": 0.24561403508771928,\n\
\ \"acc_norm_stderr\": 0.0404933929774814\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.31724137931034485,\n \"acc_stderr\": 0.03878352372138623,\n\
\ \"acc_norm\": 0.31724137931034485,\n \"acc_norm_stderr\": 0.03878352372138623\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.2275132275132275,\n \"acc_stderr\": 0.021591269407823792,\n \"\
acc_norm\": 0.2275132275132275,\n \"acc_norm_stderr\": 0.021591269407823792\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.1746031746031746,\n\
\ \"acc_stderr\": 0.03395490020856112,\n \"acc_norm\": 0.1746031746031746,\n\
\ \"acc_norm_stderr\": 0.03395490020856112\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.18,\n \"acc_stderr\": 0.038612291966536934,\n \
\ \"acc_norm\": 0.18,\n \"acc_norm_stderr\": 0.038612291966536934\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.1870967741935484,\n \"acc_stderr\": 0.022185710092252252,\n \"\
acc_norm\": 0.1870967741935484,\n \"acc_norm_stderr\": 0.022185710092252252\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.21674876847290642,\n \"acc_stderr\": 0.02899033125251624,\n \"\
acc_norm\": 0.21674876847290642,\n \"acc_norm_stderr\": 0.02899033125251624\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.26,\n \"acc_stderr\": 0.044084400227680794,\n \"acc_norm\"\
: 0.26,\n \"acc_norm_stderr\": 0.044084400227680794\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.22424242424242424,\n \"acc_stderr\": 0.032568666616811015,\n\
\ \"acc_norm\": 0.22424242424242424,\n \"acc_norm_stderr\": 0.032568666616811015\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.1919191919191919,\n \"acc_stderr\": 0.02805779167298902,\n \"\
acc_norm\": 0.1919191919191919,\n \"acc_norm_stderr\": 0.02805779167298902\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.19689119170984457,\n \"acc_stderr\": 0.028697873971860664,\n\
\ \"acc_norm\": 0.19689119170984457,\n \"acc_norm_stderr\": 0.028697873971860664\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.2153846153846154,\n \"acc_stderr\": 0.020843034557462878,\n\
\ \"acc_norm\": 0.2153846153846154,\n \"acc_norm_stderr\": 0.020843034557462878\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.24444444444444444,\n \"acc_stderr\": 0.026202766534652148,\n \
\ \"acc_norm\": 0.24444444444444444,\n \"acc_norm_stderr\": 0.026202766534652148\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.19747899159663865,\n \"acc_stderr\": 0.025859164122051467,\n\
\ \"acc_norm\": 0.19747899159663865,\n \"acc_norm_stderr\": 0.025859164122051467\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.1986754966887417,\n \"acc_stderr\": 0.03257847384436777,\n \"\
acc_norm\": 0.1986754966887417,\n \"acc_norm_stderr\": 0.03257847384436777\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.21651376146788992,\n \"acc_stderr\": 0.017658710594443135,\n \"\
acc_norm\": 0.21651376146788992,\n \"acc_norm_stderr\": 0.017658710594443135\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.1527777777777778,\n \"acc_stderr\": 0.024536326026134224,\n \"\
acc_norm\": 0.1527777777777778,\n \"acc_norm_stderr\": 0.024536326026134224\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.28921568627450983,\n \"acc_stderr\": 0.03182231867647554,\n \"\
acc_norm\": 0.28921568627450983,\n \"acc_norm_stderr\": 0.03182231867647554\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.2616033755274262,\n \"acc_stderr\": 0.028609516716994934,\n \
\ \"acc_norm\": 0.2616033755274262,\n \"acc_norm_stderr\": 0.028609516716994934\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.28699551569506726,\n\
\ \"acc_stderr\": 0.030360379710291947,\n \"acc_norm\": 0.28699551569506726,\n\
\ \"acc_norm_stderr\": 0.030360379710291947\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.2595419847328244,\n \"acc_stderr\": 0.03844876139785271,\n\
\ \"acc_norm\": 0.2595419847328244,\n \"acc_norm_stderr\": 0.03844876139785271\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.2396694214876033,\n \"acc_stderr\": 0.03896878985070417,\n \"\
acc_norm\": 0.2396694214876033,\n \"acc_norm_stderr\": 0.03896878985070417\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.25925925925925924,\n\
\ \"acc_stderr\": 0.042365112580946336,\n \"acc_norm\": 0.25925925925925924,\n\
\ \"acc_norm_stderr\": 0.042365112580946336\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.2147239263803681,\n \"acc_stderr\": 0.03226219377286774,\n\
\ \"acc_norm\": 0.2147239263803681,\n \"acc_norm_stderr\": 0.03226219377286774\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.33035714285714285,\n\
\ \"acc_stderr\": 0.04464285714285713,\n \"acc_norm\": 0.33035714285714285,\n\
\ \"acc_norm_stderr\": 0.04464285714285713\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.17475728155339806,\n \"acc_stderr\": 0.037601780060266224,\n\
\ \"acc_norm\": 0.17475728155339806,\n \"acc_norm_stderr\": 0.037601780060266224\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.2863247863247863,\n\
\ \"acc_stderr\": 0.029614323690456648,\n \"acc_norm\": 0.2863247863247863,\n\
\ \"acc_norm_stderr\": 0.029614323690456648\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.26181353767560667,\n\
\ \"acc_stderr\": 0.01572083867844526,\n \"acc_norm\": 0.26181353767560667,\n\
\ \"acc_norm_stderr\": 0.01572083867844526\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.24855491329479767,\n \"acc_stderr\": 0.023267528432100174,\n\
\ \"acc_norm\": 0.24855491329479767,\n \"acc_norm_stderr\": 0.023267528432100174\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.22793296089385476,\n\
\ \"acc_stderr\": 0.014030149950805097,\n \"acc_norm\": 0.22793296089385476,\n\
\ \"acc_norm_stderr\": 0.014030149950805097\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.23529411764705882,\n \"acc_stderr\": 0.02428861946604611,\n\
\ \"acc_norm\": 0.23529411764705882,\n \"acc_norm_stderr\": 0.02428861946604611\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.18971061093247588,\n\
\ \"acc_stderr\": 0.022268196258783228,\n \"acc_norm\": 0.18971061093247588,\n\
\ \"acc_norm_stderr\": 0.022268196258783228\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.21604938271604937,\n \"acc_stderr\": 0.022899162918445806,\n\
\ \"acc_norm\": 0.21604938271604937,\n \"acc_norm_stderr\": 0.022899162918445806\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.23049645390070922,\n \"acc_stderr\": 0.025123739226872405,\n \
\ \"acc_norm\": 0.23049645390070922,\n \"acc_norm_stderr\": 0.025123739226872405\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.2457627118644068,\n\
\ \"acc_stderr\": 0.010996156635142692,\n \"acc_norm\": 0.2457627118644068,\n\
\ \"acc_norm_stderr\": 0.010996156635142692\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.1801470588235294,\n \"acc_stderr\": 0.02334516361654485,\n\
\ \"acc_norm\": 0.1801470588235294,\n \"acc_norm_stderr\": 0.02334516361654485\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.25,\n \"acc_stderr\": 0.01751781884501444,\n \"acc_norm\"\
: 0.25,\n \"acc_norm_stderr\": 0.01751781884501444\n },\n \"harness|hendrycksTest-public_relations|5\"\
: {\n \"acc\": 0.23636363636363636,\n \"acc_stderr\": 0.04069306319721378,\n\
\ \"acc_norm\": 0.23636363636363636,\n \"acc_norm_stderr\": 0.04069306319721378\n\
\ },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.18775510204081633,\n\
\ \"acc_stderr\": 0.02500025603954621,\n \"acc_norm\": 0.18775510204081633,\n\
\ \"acc_norm_stderr\": 0.02500025603954621\n },\n \"harness|hendrycksTest-sociology|5\"\
: {\n \"acc\": 0.24378109452736318,\n \"acc_stderr\": 0.03036049015401465,\n\
\ \"acc_norm\": 0.24378109452736318,\n \"acc_norm_stderr\": 0.03036049015401465\n\
\ },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\":\
\ 0.28,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.28,\n\
\ \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-virology|5\"\
: {\n \"acc\": 0.28313253012048195,\n \"acc_stderr\": 0.03507295431370518,\n\
\ \"acc_norm\": 0.28313253012048195,\n \"acc_norm_stderr\": 0.03507295431370518\n\
\ },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.29239766081871343,\n\
\ \"acc_stderr\": 0.03488647713457921,\n \"acc_norm\": 0.29239766081871343,\n\
\ \"acc_norm_stderr\": 0.03488647713457921\n },\n \"harness|truthfulqa:mc|0\"\
: {\n \"mc1\": 0.24724602203182375,\n \"mc1_stderr\": 0.015102404797359652,\n\
\ \"mc2\": 0.4829960000266921,\n \"mc2_stderr\": 0.01598626138943452\n\
\ },\n \"harness|winogrande|5\": {\n \"acc\": 0.5130228887134964,\n\
\ \"acc_stderr\": 0.014047718393997667\n },\n \"harness|gsm8k|5\":\
\ {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n }\n}\n```"
repo_url: https://huggingface.co/BFauber/opt125m_10e5_50ep
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|arc:challenge|25_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|gsm8k|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hellaswag|10_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-02T19-50-08.433413.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-02T19-50-08.433413.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- '**/details_harness|winogrande|5_2024-02-02T19-50-08.433413.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-02T19-50-08.433413.parquet'
- config_name: results
data_files:
- split: 2024_02_02T19_50_08.433413
path:
- results_2024-02-02T19-50-08.433413.parquet
- split: latest
path:
- results_2024-02-02T19-50-08.433413.parquet
---
# Dataset Card for Evaluation run of BFauber/opt125m_10e5_50ep
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [BFauber/opt125m_10e5_50ep](https://huggingface.co/BFauber/opt125m_10e5_50ep) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_BFauber__opt125m_10e5_50ep",
"harness_winogrande_5",
	split="latest")
```
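The repository name passed to `load_dataset` is derived from the model id: the `details_` prefix is added and the `/` separator is replaced with `__`. A minimal sketch of that mapping (the helper name is illustrative, not part of any library):

```python
def details_repo(model_id: str) -> str:
    """Build the Open LLM Leaderboard details-repo name for a model id.

    The per-model details dataset lives under the ``open-llm-leaderboard``
    namespace, with ``/`` in the model id replaced by ``__``.
    """
    return "open-llm-leaderboard/details_" + model_id.replace("/", "__")

print(details_repo("BFauber/opt125m_10e5_50ep"))
# → open-llm-leaderboard/details_BFauber__opt125m_10e5_50ep
```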
## Latest results
These are the [latest results from run 2024-02-02T19:50:08.433413](https://huggingface.co/datasets/open-llm-leaderboard/details_BFauber__opt125m_10e5_50ep/blob/main/results_2024-02-02T19-50-08.433413.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.23829067580904603,
"acc_stderr": 0.030168081539566568,
"acc_norm": 0.2383084921515475,
"acc_norm_stderr": 0.030960161901202803,
"mc1": 0.24724602203182375,
"mc1_stderr": 0.015102404797359652,
"mc2": 0.4829960000266921,
"mc2_stderr": 0.01598626138943452
},
"harness|arc:challenge|25": {
"acc": 0.22013651877133106,
"acc_stderr": 0.012108124883460983,
"acc_norm": 0.23890784982935154,
"acc_norm_stderr": 0.012461071376316614
},
"harness|hellaswag|10": {
"acc": 0.27106154152559253,
"acc_stderr": 0.004435993492583855,
"acc_norm": 0.28978291177056364,
"acc_norm_stderr": 0.0045273436511308095
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.29,
"acc_stderr": 0.04560480215720684,
"acc_norm": 0.29,
"acc_norm_stderr": 0.04560480215720684
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.31851851851851853,
"acc_stderr": 0.040247784019771096,
"acc_norm": 0.31851851851851853,
"acc_norm_stderr": 0.040247784019771096
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.17763157894736842,
"acc_stderr": 0.031103182383123398,
"acc_norm": 0.17763157894736842,
"acc_norm_stderr": 0.031103182383123398
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.36,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.36,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.20754716981132076,
"acc_stderr": 0.02495991802891127,
"acc_norm": 0.20754716981132076,
"acc_norm_stderr": 0.02495991802891127
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.20833333333333334,
"acc_stderr": 0.033961162058453336,
"acc_norm": 0.20833333333333334,
"acc_norm_stderr": 0.033961162058453336
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.21,
"acc_stderr": 0.04093601807403326,
"acc_norm": 0.21,
"acc_norm_stderr": 0.04093601807403326
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.2,
"acc_stderr": 0.04020151261036845,
"acc_norm": 0.2,
"acc_norm_stderr": 0.04020151261036845
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.19,
"acc_stderr": 0.039427724440366234,
"acc_norm": 0.19,
"acc_norm_stderr": 0.039427724440366234
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.2254335260115607,
"acc_stderr": 0.03186209851641144,
"acc_norm": 0.2254335260115607,
"acc_norm_stderr": 0.03186209851641144
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.21568627450980393,
"acc_stderr": 0.04092563958237654,
"acc_norm": 0.21568627450980393,
"acc_norm_stderr": 0.04092563958237654
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.25957446808510637,
"acc_stderr": 0.02865917937429232,
"acc_norm": 0.25957446808510637,
"acc_norm_stderr": 0.02865917937429232
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.24561403508771928,
"acc_stderr": 0.0404933929774814,
"acc_norm": 0.24561403508771928,
"acc_norm_stderr": 0.0404933929774814
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.31724137931034485,
"acc_stderr": 0.03878352372138623,
"acc_norm": 0.31724137931034485,
"acc_norm_stderr": 0.03878352372138623
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.2275132275132275,
"acc_stderr": 0.021591269407823792,
"acc_norm": 0.2275132275132275,
"acc_norm_stderr": 0.021591269407823792
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.1746031746031746,
"acc_stderr": 0.03395490020856112,
"acc_norm": 0.1746031746031746,
"acc_norm_stderr": 0.03395490020856112
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.18,
"acc_stderr": 0.038612291966536934,
"acc_norm": 0.18,
"acc_norm_stderr": 0.038612291966536934
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.1870967741935484,
"acc_stderr": 0.022185710092252252,
"acc_norm": 0.1870967741935484,
"acc_norm_stderr": 0.022185710092252252
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.21674876847290642,
"acc_stderr": 0.02899033125251624,
"acc_norm": 0.21674876847290642,
"acc_norm_stderr": 0.02899033125251624
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.26,
"acc_stderr": 0.044084400227680794,
"acc_norm": 0.26,
"acc_norm_stderr": 0.044084400227680794
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.22424242424242424,
"acc_stderr": 0.032568666616811015,
"acc_norm": 0.22424242424242424,
"acc_norm_stderr": 0.032568666616811015
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.1919191919191919,
"acc_stderr": 0.02805779167298902,
"acc_norm": 0.1919191919191919,
"acc_norm_stderr": 0.02805779167298902
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.19689119170984457,
"acc_stderr": 0.028697873971860664,
"acc_norm": 0.19689119170984457,
"acc_norm_stderr": 0.028697873971860664
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.2153846153846154,
"acc_stderr": 0.020843034557462878,
"acc_norm": 0.2153846153846154,
"acc_norm_stderr": 0.020843034557462878
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.24444444444444444,
"acc_stderr": 0.026202766534652148,
"acc_norm": 0.24444444444444444,
"acc_norm_stderr": 0.026202766534652148
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.19747899159663865,
"acc_stderr": 0.025859164122051467,
"acc_norm": 0.19747899159663865,
"acc_norm_stderr": 0.025859164122051467
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.1986754966887417,
"acc_stderr": 0.03257847384436777,
"acc_norm": 0.1986754966887417,
"acc_norm_stderr": 0.03257847384436777
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.21651376146788992,
"acc_stderr": 0.017658710594443135,
"acc_norm": 0.21651376146788992,
"acc_norm_stderr": 0.017658710594443135
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.1527777777777778,
"acc_stderr": 0.024536326026134224,
"acc_norm": 0.1527777777777778,
"acc_norm_stderr": 0.024536326026134224
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.28921568627450983,
"acc_stderr": 0.03182231867647554,
"acc_norm": 0.28921568627450983,
"acc_norm_stderr": 0.03182231867647554
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.2616033755274262,
"acc_stderr": 0.028609516716994934,
"acc_norm": 0.2616033755274262,
"acc_norm_stderr": 0.028609516716994934
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.28699551569506726,
"acc_stderr": 0.030360379710291947,
"acc_norm": 0.28699551569506726,
"acc_norm_stderr": 0.030360379710291947
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.2595419847328244,
"acc_stderr": 0.03844876139785271,
"acc_norm": 0.2595419847328244,
"acc_norm_stderr": 0.03844876139785271
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.2396694214876033,
"acc_stderr": 0.03896878985070417,
"acc_norm": 0.2396694214876033,
"acc_norm_stderr": 0.03896878985070417
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.25925925925925924,
"acc_stderr": 0.042365112580946336,
"acc_norm": 0.25925925925925924,
"acc_norm_stderr": 0.042365112580946336
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.2147239263803681,
"acc_stderr": 0.03226219377286774,
"acc_norm": 0.2147239263803681,
"acc_norm_stderr": 0.03226219377286774
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.33035714285714285,
"acc_stderr": 0.04464285714285713,
"acc_norm": 0.33035714285714285,
"acc_norm_stderr": 0.04464285714285713
},
"harness|hendrycksTest-management|5": {
"acc": 0.17475728155339806,
"acc_stderr": 0.037601780060266224,
"acc_norm": 0.17475728155339806,
"acc_norm_stderr": 0.037601780060266224
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.2863247863247863,
"acc_stderr": 0.029614323690456648,
"acc_norm": 0.2863247863247863,
"acc_norm_stderr": 0.029614323690456648
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.26181353767560667,
"acc_stderr": 0.01572083867844526,
"acc_norm": 0.26181353767560667,
"acc_norm_stderr": 0.01572083867844526
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.24855491329479767,
"acc_stderr": 0.023267528432100174,
"acc_norm": 0.24855491329479767,
"acc_norm_stderr": 0.023267528432100174
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.22793296089385476,
"acc_stderr": 0.014030149950805097,
"acc_norm": 0.22793296089385476,
"acc_norm_stderr": 0.014030149950805097
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.23529411764705882,
"acc_stderr": 0.02428861946604611,
"acc_norm": 0.23529411764705882,
"acc_norm_stderr": 0.02428861946604611
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.18971061093247588,
"acc_stderr": 0.022268196258783228,
"acc_norm": 0.18971061093247588,
"acc_norm_stderr": 0.022268196258783228
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.21604938271604937,
"acc_stderr": 0.022899162918445806,
"acc_norm": 0.21604938271604937,
"acc_norm_stderr": 0.022899162918445806
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.23049645390070922,
"acc_stderr": 0.025123739226872405,
"acc_norm": 0.23049645390070922,
"acc_norm_stderr": 0.025123739226872405
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.2457627118644068,
"acc_stderr": 0.010996156635142692,
"acc_norm": 0.2457627118644068,
"acc_norm_stderr": 0.010996156635142692
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.1801470588235294,
"acc_stderr": 0.02334516361654485,
"acc_norm": 0.1801470588235294,
"acc_norm_stderr": 0.02334516361654485
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.25,
"acc_stderr": 0.01751781884501444,
"acc_norm": 0.25,
"acc_norm_stderr": 0.01751781884501444
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.23636363636363636,
"acc_stderr": 0.04069306319721378,
"acc_norm": 0.23636363636363636,
"acc_norm_stderr": 0.04069306319721378
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.18775510204081633,
"acc_stderr": 0.02500025603954621,
"acc_norm": 0.18775510204081633,
"acc_norm_stderr": 0.02500025603954621
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.24378109452736318,
"acc_stderr": 0.03036049015401465,
"acc_norm": 0.24378109452736318,
"acc_norm_stderr": 0.03036049015401465
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-virology|5": {
"acc": 0.28313253012048195,
"acc_stderr": 0.03507295431370518,
"acc_norm": 0.28313253012048195,
"acc_norm_stderr": 0.03507295431370518
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.29239766081871343,
"acc_stderr": 0.03488647713457921,
"acc_norm": 0.29239766081871343,
"acc_norm_stderr": 0.03488647713457921
},
"harness|truthfulqa:mc|0": {
"mc1": 0.24724602203182375,
"mc1_stderr": 0.015102404797359652,
"mc2": 0.4829960000266921,
"mc2_stderr": 0.01598626138943452
},
"harness|winogrande|5": {
"acc": 0.5130228887134964,
"acc_stderr": 0.014047718393997667
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]
Fhrozen/dcase22_task3
---
license: mit
annotations_creators:
- unknown
language_creators:
- unknown
size_categories:
- 100K<n<100M
source_datasets:
- unknown
task_categories:
- audio-classification
task_ids:
- slot-filling
---
# DCASE 2022 Task 3 Data sets: STARSS22 Dataset + Synthetic SELD mixtures
[Audio Research Group / Tampere University](https://webpages.tuni.fi/arg/)
[Creative AI Lab/ SONY R&D Center](https://www.sony.com/en/SonyInfo/research/research-areas/audio-acoustics/)
## Important
**This is a copy of the original Zenodo dataset.**
# Authors
**Tampere University**
- Archontis Politis ([contact](mailto:archontis.politis@tuni.fi), [profile](https://scholar.google.fi/citations?user=DuCqB3sAAAAJ&hl=en))
- Parthasaarathy Sudarsanam([contact](mailto:parthasaarathy.ariyakulamsudarsanam@tuni.fi), [profile](https://scholar.google.com/citations?user=yxZ1qAIAAAAJ&hl=en))
- Sharath Adavanne ([contact](mailto:sharath.adavanne@tuni.fi), [profile](https://www.aane.in))
- Daniel Krause ([contact](mailto:daniel.krause@tuni.fi), [profile](https://scholar.google.com/citations?user=pSLng-8AAAAJ&hl=en))
- Tuomas Virtanen ([contact](mailto:tuomas.virtanen@tuni.fi), [profile](https://homepages.tuni.fi/tuomas.virtanen/))
**SONY**
- Yuki Mitsufuji ([contact](mailto:yuhki.mitsufuji@sony.com), [profile](https://scholar.google.com/citations?user=GMytI10AAAAJ))
- Kazuki Shimada ([contact](mailto:kazuki.shimada@sony.com), [profile](https://scholar.google.com/citations?user=-t9IslAAAAAJ&hl=en))
- Naoya Takahashi ([profile](https://scholar.google.com/citations?user=JbtYJMoAAAAJ))
- Yuichiro Koyama
- Shusuke Takahashi
# Description
The **Sony-TAu Realistic Spatial Soundscapes 2022 (STARSS22)** dataset contains multichannel recordings of sound scenes in various rooms and environments, together with temporal and spatial annotations of prominent events belonging to a set of target classes. The dataset is collected in two different countries, in Tampere, Finland by the Audio Research Group (ARG) of **Tampere University (TAU)**, and in Tokyo, Japan by **SONY**, using a similar setup and annotation procedure. The dataset is delivered in two 4-channel spatial recording formats, a microphone array one (**MIC**), and a first-order Ambisonics one (**FOA**). These recordings serve as the development dataset for the [DCASE 2022 Sound Event Localization and Detection Task](https://dcase.community/challenge2022/task-sound-event-localization-and-detection) of the [DCASE 2022 Challenge](https://dcase.community/challenge2022/).
Contrary to the three previous datasets of synthetic spatial sound scenes, TAU Spatial Sound Events 2019 ([development](https://doi.org/10.5281/zenodo.2599196)/[evaluation](https://doi.org/10.5281/zenodo.3377088)), [TAU-NIGENS Spatial Sound Events 2020](https://doi.org/10.5281/zenodo.4064792), and [TAU-NIGENS Spatial Sound Events 2021](https://doi.org/10.5281/zenodo.5476980), associated with the previous iterations of the DCASE Challenge, the STARSS22 dataset contains recordings of real sound scenes, and hence it avoids some of the pitfalls of synthetic scene generation. Some key properties are:
- annotations are based on a combination of human annotators for sound event activity and optical tracking for spatial positions
- the annotated target event classes are determined by the composition of the real scenes
- the density, polyphony, occurrences, and co-occurrences of events and sound classes are not random; they follow the actions and interactions of participants in the real scenes
The recordings were collected between September 2021 and February 2022. Collection of data from the TAU side has received funding from Google.
# Aim
The dataset is suitable for training and evaluation of machine-listening models for sound event detection (SED), general sound source localization with diverse sounds or signal-of-interest localization, and joint sound-event-localization-and-detection (SELD). Additionally, the dataset can be used for evaluation of signal processing methods that do not necessarily rely on training, such as acoustic source localization methods and multiple-source acoustic tracking. The dataset allows evaluation of the performance and robustness of the aforementioned applications for diverse types of sounds, and under diverse acoustic conditions.
# Recording procedure
The sound scene recordings were captured with a high-channel-count spherical microphone array ([Eigenmike em32 by mh Acoustics](https://mhacoustics.com/products)), simultaneously with a 360° video recording spatially aligned with the spherical array recording ([Ricoh Theta V](https://theta360.com/en/about/theta/v.html)). Additionally, the main sound sources of interest were equipped with tracking markers, which were tracked throughout the recording with an [Optitrack Flex 13](https://optitrack.com/cameras/flex-13/) system arranged around each scene. All scenes were based on human actors performing actions, interacting with each other and with the objects in the scene, and were dynamic by design. Since the actors produced most of the sounds in the scene (but not all), they were additionally equipped with [RØDE Wireless GO II](https://rode.com/microphones/wireless/wirelessgoii) microphones, providing close-miked recordings of the main events. Recording would start and stop according to a scene being acted, usually lasting between 1 and 5 minutes. Recording would start in all microphones and tracking devices before the beginning of the scene, and would stop right after. A clapper sound would initiate the acting and serve as a reference signal for synchronization between the em32 recording, the Ricoh Theta V video, the wireless microphone recordings, and the Optitrack tracker data. Synchronized clips of all of them would be cropped and stored at the end of each recording session.
# Annotation procedure
By combining information from the wireless microphones, the optical tracking data, and the 360° videos, spatiotemporal annotations were extracted semi-automatically and validated manually. More specifically, the actors were tracked throughout each recording session wearing headbands with markers, and the spatial positions of other human-related sources, such as mouth, hands, or footsteps, were geometrically extrapolated from those head coordinates. Additional trackers were mounted on other sources of interest (e.g. vacuum cleaner, guitar, water tap, cupboard, door handle, among others). Each actor had a wireless microphone mounted on their lapel, providing a clear recording of all sound events produced by that actor, and/or any independent sources closer to that actor than to the rest. The temporal annotation was based primarily on those close-miked recordings: the annotators marked the sound event activity and labeled each event's class by listening to those signals. Events that were not audible in the overall em32 scene recording were not annotated, even if they were audible in the lapel recordings. In ambiguous cases, the annotators could rely on the 360° video to associate an event with a certain actor or source. The final sound event temporal annotations were associated with the tracking data through the class of each sound event and the actor that produced them. All tracked Cartesian coordinates delivered by the tracker were converted to directions-of-arrival (DOAs) with respect to the coordinates of the Eigenmike. Finally, the class, temporal, and spatial annotations were combined and converted to the challenge format. Validation of the annotations was done by observing videos in which the activities of each class were visualized as markers positioned at their respective DOAs, overlaid on the 360° video plane from the Ricoh Theta V.
# Recording formats
The array response of the two recording formats can be considered known. The following theoretical spatial responses (steering vectors) modeling the two formats describe the directional response of each channel to a source incident from direction-of-arrival (DOA) given by azimuth angle $\phi$ and elevation angle $\theta$.
**For the first-order ambisonics (FOA):**
\begin{eqnarray}
H_1(\phi, \theta, f) &=& 1 \\
H_2(\phi, \theta, f) &=& \sin(\phi) * \cos(\theta) \\
H_3(\phi, \theta, f) &=& \sin(\theta) \\
H_4(\phi, \theta, f) &=& \cos(\phi) * \cos(\theta)
\end{eqnarray}
The (FOA) format is obtained by converting the 32-channel microphone array signals by means of encoding filters based on anechoic measurements of the Eigenmike array response. Note that in the formulas above the encoding format is assumed frequency-independent, something that holds true up to around 9kHz with the specific microphone array, while the actual encoded responses start to deviate gradually at higher frequencies from the ideal ones provided above.
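For illustration, the ideal FOA response above can be evaluated directly. This is a minimal sketch under the frequency-independent assumption (the function name is ours), so it only approximates the real encoded responses below roughly 9 kHz:

```python
import math

def foa_steering(azi_deg, ele_deg):
    """Ideal (frequency-independent) FOA steering vector [H1, H2, H3, H4]
    for a source at the given azimuth/elevation, per the formulas above."""
    phi = math.radians(azi_deg)
    theta = math.radians(ele_deg)
    return [
        1.0,                              # H1: omnidirectional
        math.sin(phi) * math.cos(theta),  # H2
        math.sin(theta),                  # H3
        math.cos(phi) * math.cos(theta),  # H4
    ]
```

For example, a source at the front (azimuth 0°, elevation 0°) yields `[1, 0, 0, 1]`, while a source at the left (azimuth 90°) yields `[1, 1, 0, 0]`.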
**For the tetrahedral microphone array (MIC):**
The four microphones have the following positions, in spherical coordinates $(\phi, \theta, r)$:
\begin{eqnarray}
M1: &\quad(&45^\circ, &&35^\circ, &4.2\mathrm{cm})\nonumber\\
M2: &\quad(&-45^\circ, &-&35^\circ, &4.2\mathrm{cm})\nonumber\\
M3: &\quad(&135^\circ, &-&35^\circ, &4.2\mathrm{cm})\nonumber\\
M4: &\quad(&-135^\circ, &&35^\circ, &4.2\mathrm{cm})\nonumber
\end{eqnarray}
Since the microphones are mounted on an acoustically-hard spherical baffle, an analytical expression for the directional array response is given by the expansion:
\begin{equation}
H_m(\phi_m, \theta_m, \phi, \theta, \omega) = \frac{1}{(\omega R/c)^2}\sum_{n=0}^{30} \frac{i^{n-1}}{h_n'^{(2)}(\omega R/c)}(2n+1)P_n(\cos(\gamma_m))
\end{equation}
where $m$ is the channel number, $(\phi_m, \theta_m)$ are the specific microphone's azimuth and elevation position, $\omega = 2\pi f$ is the angular frequency, $R = 0.042$m is the array radius, $c = 343$m/s is the speed of sound, $\cos(\gamma_m)$ is the cosine angle between the microphone and the DOA, and $P_n$ is the unnormalized Legendre polynomial of degree $n$, and $h_n'^{(2)}$ is the derivative with respect to the argument of a spherical Hankel function of the second kind. The expansion is limited to 30 terms which provides negligible modeling error up to 20kHz. Example routines that can generate directional frequency and impulse array responses based on the above formula can be found [here](https://github.com/polarch/Array-Response-Simulator).
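The expansion above can be sketched numerically with SciPy's spherical Bessel functions. This is an illustrative implementation (function names are ours, not taken from the linked simulator), assuming SciPy is available:

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn, eval_legendre

def sph_hankel2_deriv(n, x):
    """Derivative of the spherical Hankel function of the second kind, h_n'^{(2)}(x)."""
    return spherical_jn(n, x, derivative=True) - 1j * spherical_yn(n, x, derivative=True)

def rigid_sphere_response(mic_azi, mic_ele, doa_azi, doa_ele, f,
                          R=0.042, c=343.0, order=30):
    """Directional response H_m of one microphone on a rigid spherical baffle,
    per the expansion above (angles in radians, frequency f in Hz)."""
    kR = 2.0 * np.pi * f * R / c  # omega * R / c
    # cosine of the angle gamma_m between the microphone position and the DOA
    cos_gamma = (np.sin(mic_ele) * np.sin(doa_ele)
                 + np.cos(mic_ele) * np.cos(doa_ele) * np.cos(mic_azi - doa_azi))
    H = 0.0 + 0.0j
    for n in range(order + 1):
        H += (1j ** (n - 1) / sph_hankel2_deriv(n, kR)
              * (2 * n + 1) * eval_legendre(n, cos_gamma))
    return H / kR ** 2
```

For instance, `rigid_sphere_response(np.deg2rad(45), np.deg2rad(35), np.deg2rad(45), np.deg2rad(35), 1000.0)` gives the 1 kHz on-axis response of microphone M1.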
# Dataset specifications
The specifications of the dataset can be summarized in the following:
- 70 recording clips of 30 sec to 5 min duration, with a total time of ~2 hrs, contributed by SONY (development dataset).
- 51 recording clips of 1 min to 5 min duration, with a total time of ~3 hrs, contributed by TAU (development dataset).
- A training-test split is provided for reporting results using the development dataset.
- 40 recordings contributed by SONY for the training split, captured in 2 rooms (dev-train-sony).
- 30 recordings contributed by SONY for the testing split, captured in 2 rooms (dev-test-sony).
- 27 recordings contributed by TAU for the training split, captured in 4 rooms (dev-train-tau).
- 24 recordings contributed by TAU for the testing split, captured in 3 rooms (dev-test-tau).
- A total of 11 unique rooms captured in the recordings, 4 from SONY and 7 from TAU (development set).
- Sampling rate 24kHz.
- Two 4-channel 3-dimensional recording formats: first-order Ambisonics (FOA) and tetrahedral microphone array (MIC).
- Recordings are taken in two different countries and two different sites.
- Each recording clip is part of a recording session happening in a unique room.
- Groups of participants, sound making props, and scene scenarios are unique for each session (with a few exceptions).
- To achieve good variability and efficiency in the data, in terms of presence, density, movement, and/or spatial distribution of the sounds events, the scenes are loosely scripted.
- 13 target classes are identified in the recordings and strongly annotated by humans.
- Spatial annotations for those active events are captured by an optical tracking system.
- Sound events out of the target classes are considered as interference.
# Sound event classes
13 target sound event classes were annotated. The classes follow loosely the [Audioset ontology](https://research.google.com/audioset/ontology/index.html).
0. Female speech, woman speaking
1. Male speech, man speaking
2. Clapping
3. Telephone
4. Laughter
5. Domestic sounds
6. Walk, footsteps
7. Door, open or close
8. Music
9. Musical instrument
10. Water tap, faucet
11. Bell
12. Knock
The content of some of these classes corresponds to events of a limited range of Audioset-related subclasses. These are detailed here as additional information on the diversity of those sound events:
- Telephone
- Mostly traditional _Telephone Bell Ringing_ and _Ringtone_ sounds, without musical ringtones.
- Domestic sounds
- Sounds of _Vacuum cleaner_
- Sounds of water boiler, closer to _Boiling_
- Sounds of air circulator, closer to _Mechanical fan_
- Door, open or close
- Combination of _Door_ and _Cupboard open or close_
- Music
- _Background music_ and _Pop music_ played by a loudspeaker in the room.
- Musical Instrument
- Acoustic guitar
- Marimba, xylophone
- Cowbell
- Piano
- Rattle (instrument)
- Bell
- Combination of sounds from hotel bell and glass bell, closer to _Bicycle bell_ and single _Chime_.
Some additional notes:
- The speech classes contain speech in a few different languages.
- There are occasionally localized sound events that are not annotated and are considered as interferers, with examples such as _computer keyboard_, _shuffling cards_, _dishes, pots, and pans_.
- There is natural background noise (e.g. HVAC noise) in all recordings, at very low levels in some and at quite high levels in others. Such mostly diffuse background noise should be distinct from other noisy target sources (e.g. vacuum cleaner, mechanical fan) since these are clearly spatially localized.
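When processing the metadata programmatically, the class indices above can be kept in a plain lookup table (a convenience added here; it simply mirrors the numbered list above):

```python
# Index-to-label mapping for the 13 annotated target classes.
SOUND_EVENT_CLASSES = {
    0: "Female speech, woman speaking",
    1: "Male speech, man speaking",
    2: "Clapping",
    3: "Telephone",
    4: "Laughter",
    5: "Domestic sounds",
    6: "Walk, footsteps",
    7: "Door, open or close",
    8: "Music",
    9: "Musical instrument",
    10: "Water tap, faucet",
    11: "Bell",
    12: "Knock",
}
```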
# Naming Convention (Development dataset)
The recordings in the development dataset follow the naming convention:
fold[fold number]_room[room number]_mix[recording number per room].wav
The fold number at the moment is used only to distinguish between the training and testing split. The room information is provided for the user of the dataset to potentially help understand the performance of their method with respect to different conditions.
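A small helper (ours, not part of any official tooling) can decompose such filenames:

```python
import re

_NAME_RE = re.compile(r"fold(\d+)_room(\d+)_mix(\d+)\.wav$")

def parse_recording_name(fname):
    """Split a development-set filename into its fold, room, and mix numbers."""
    m = _NAME_RE.search(fname)
    if m is None:
        raise ValueError(f"unexpected filename: {fname!r}")
    fold, room, mix = (int(g) for g in m.groups())
    return {"fold": fold, "room": room, "mix": mix}
```

For example, `parse_recording_name("fold3_room21_mix001.wav")` returns `{"fold": 3, "room": 21, "mix": 1}`.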
# Reference labels and directions-of-arrival
For each recording in the development dataset, the labels and DoAs are provided in a plain text CSV file of the same filename as the recording, in the following format:
[frame number (int)], [active class index (int)], [source number index (int)], [azimuth (int)], [elevation (int)]
Frame, class, and source enumeration begins at 0. Frames correspond to a temporal resolution of 100msec. Azimuth and elevation angles are given in degrees, rounded to the closest integer value, with azimuth and elevation being zero at the front, azimuth $\phi \in [-180^{\circ}, 180^{\circ}]$, and elevation $\theta \in [-90^{\circ}, 90^{\circ}]$. Note that the azimuth angle is increasing counter-clockwise ($\phi = 90^{\circ}$ at the left).
The source index is a unique integer for each source in the scene, and it is provided only as additional information. Note that each unique actor gets assigned one such identifier, but not individual events produced by the same actor; e.g. a _clapping_ event and a _laughter_ event produced by the same person have the same identifier. Independent sources that are not actors (e.g. a loudspeaker playing music in the room) get a 0 identifier. Note that source identifier information is only included in the development metadata and is not required to be provided by the participants in their results.
Overlapping sound events are indicated with duplicate frame numbers, and can belong to a different or the same class. An example sequence could be:
10, 1, 1, -50, 30
11, 1, 1, -50, 30
11, 1, 2, 10, -20
12, 1, 2, 10, -20
13, 1, 2, 10, -20
13, 8, 0, -40, 0
which describes that in frames 10 and 11, an event of class _male speech_ (_class 1_) belonging to one actor (_source 1_) is active at direction (-50°,30°). At frame 11, however, a second instance of the same class appears simultaneously at a different direction (10°,-20°), belonging to another actor (_source 2_), while at frame 13 an additional event of class _music_ (_class 8_) appears, belonging to a non-actor source (_source 0_). Frames that contain no sound events are not included in the sequence.
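The metadata format above can be read with the Python standard library alone. The sketch below (function name ours) groups events by frame, so overlapping events collect under the same frame key:

```python
import csv
from collections import defaultdict

def load_seld_labels(path):
    """Read a metadata CSV in the format
    [frame, class index, source index, azimuth, elevation]
    and return a dict mapping frame -> list of event dicts."""
    frames = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if not row:
                continue  # skip blank lines
            frame, cls, src, azi, ele = (int(v) for v in row)
            frames[frame].append(
                {"class": cls, "source": src, "azimuth": azi, "elevation": ele})
    return dict(frames)
```

Applied to the example sequence above, frame 11 would hold two events and frame 13 would hold the _male speech_ and _music_ events.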
# Task setup
The dataset is associated with the [DCASE 2022 Challenge](http://dcase.community/challenge2022/). To have consistent reporting of results between participants on the development set a pre-defined training-testing split is provided. To compare against the challenge baseline and with other participants during the development stage, models should be trained on the training split only, and results should be reported on the testing split only.
**Note that even though there are two origins of the data, SONY and TAU, the challenge task considers the dataset as a single entity. Hence models should not be trained separately for each of the two origins, and tested individually on recordings of each of them. Instead, the recordings of the individual training splits (_dev-train-sony_, _dev-train-tau_) and testing splits (_dev-test-sony_, _dev-test-tau_) should be combined (_dev-train_, _dev-test_), and the models should be trained and evaluated on the respective combined splits.**
The evaluation part of the dataset will be published here as a new dataset version, a few weeks before the final challenge submission deadline. The additional evaluation files consist of only audio recordings without any metadata/labels. Participants can decide the training procedure, i.e. the amount of training and validation files in the development dataset, the number of ensemble models etc., and submit the results of the SELD performance on the evaluation dataset.
# File structure
```
dataset root
│ README.md this file, markdown-format
| LICENSE the license file
│
└───foa_dev Ambisonic format, 24kHz, four channels
| | dev-train-sony to be used for training when reporting development set results (SONY recordings)
│ │ | fold3_room21_mix001.wav
│ │ | fold3_room21_mix002.wav
│ │ | ...
│ │ | fold3_room22_mix001.wav
│ │ | fold3_room22_mix002.wav
│ | │ ...
| | dev-test-sony to be used for testing when reporting development set results (SONY recordings)
│ │ | fold4_room23_mix001.wav
│ │ | fold4_room23_mix002.wav
│ │ | ...
│ │ | fold4_room24_mix001.wav
│ │ | fold4_room24_mix002.wav
│ │ | ...
| | dev-train-tau to be used for training when reporting development set results (TAU recordings)
│ │ | fold3_room4_mix001.wav
│ │ | fold3_room4_mix002.wav
│ │ | ...
│ │ | fold3_room6_mix001.wav
│ │ | fold3_room6_mix002.wav
│ | │ ...
│ │ | fold3_room7_mix001.wav
│ │ | fold3_room7_mix002.wav
│ | │ ...
│ │ | fold3_room9_mix001.wav
│ │ | fold3_room9_mix002.wav
│ | │ ...
| | dev-test-tau to be used for testing when reporting development set results (TAU recordings)
│ │ | fold4_room2_mix001.wav
│ │ | fold4_room2_mix002.wav
│ │ | ...
│ │ | fold4_room8_mix001.wav
│ │ | fold4_room8_mix002.wav
│ │ | ...
│ │ | fold4_room10_mix001.wav
│ │ | fold4_room10_mix002.wav
│ │ | ...
│
└───mic_dev Microphone array format, 24kHz, four channels
| | dev-train-sony to be used for training when reporting development set results (SONY recordings)
│ │ | fold3_room21_mix001.wav
│ │ | fold3_room21_mix002.wav
│ │ | ...
│ │ | fold3_room22_mix001.wav
│ │ | fold3_room22_mix002.wav
│ | │ ...
| | dev-test-sony to be used for testing when reporting development set results (SONY recordings)
│ │ | fold4_room23_mix001.wav
│ │ | fold4_room23_mix002.wav
│ │ | ...
│ │ | fold4_room24_mix001.wav
│ │ | fold4_room24_mix002.wav
│ │ | ...
| | dev-train-tau to be used for training when reporting development set results (TAU recordings)
│ │ | fold3_room4_mix001.wav
│ │ | fold3_room4_mix002.wav
│ │ | ...
│ │ | fold3_room6_mix001.wav
│ │ | fold3_room6_mix002.wav
│ | │ ...
│ │ | fold3_room7_mix001.wav
│ │ | fold3_room7_mix002.wav
│ | │ ...
│ │ | fold3_room9_mix001.wav
│ │ | fold3_room9_mix002.wav
│ | │ ...
| | dev-test-tau to be used for testing when reporting development set results (TAU recordings)
│ │ | fold4_room2_mix001.wav
│ │ | fold4_room2_mix002.wav
│ │ | ...
│ │ | fold4_room8_mix001.wav
│ │ | fold4_room8_mix002.wav
│ │ | ...
│ │ | fold4_room10_mix001.wav
│ │ | fold4_room10_mix002.wav
│ │ | ...
│
└───metadata_dev `csv` format, 600 files
| | dev-train-sony to be used for training when reporting development set results (SONY recordings)
│ │ | fold3_room21_mix001.csv
│ │ | fold3_room21_mix002.csv
│ │ | ...
│ │ | fold3_room22_mix001.csv
│ │ | fold3_room22_mix002.csv
│ | │ ...
| | dev-test-sony to be used for testing when reporting development set results (SONY recordings)
│ │ | fold4_room23_mix001.csv
│ │ | fold4_room23_mix002.csv
│ │ | ...
│ │ | fold4_room24_mix001.csv
│ │ | fold4_room24_mix002.csv
│ │ | ...
| | dev-train-tau to be used for training when reporting development set results (TAU recordings)
│ │ | fold3_room4_mix001.csv
│ │ | fold3_room4_mix002.csv
│ │ | ...
│ │ | fold3_room6_mix001.csv
│ │ | fold3_room6_mix002.csv
│ | │ ...
│ │ | fold3_room7_mix001.csv
│ │ | fold3_room7_mix002.csv
│ | │ ...
│ │ | fold3_room9_mix001.csv
│ │ | fold3_room9_mix002.csv
│ | │ ...
| | dev-test-tau to be used for testing when reporting development set results (TAU recordings)
│ │ | fold4_room2_mix001.csv
│ │ | fold4_room2_mix002.csv
│ │ | ...
│ │ | fold4_room8_mix001.csv
│ │ | fold4_room8_mix002.csv
│ │ | ...
│ │ | fold4_room10_mix001.csv
│ │ | fold4_room10_mix002.csv
│ │ | ...
```
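The file names encode the fold, room, and mixture indices. A small, hypothetical helper to split them apart (fold 3 corresponds to the training split and fold 4 to the testing split, as in the structure above):

```python
import re

def parse_name(name):
    """Split names like 'fold4_room23_mix001.wav' into their parts."""
    m = re.match(r"fold(\d+)_room(\d+)_mix(\d+)\.(wav|csv)$", name)
    if m is None:
        raise ValueError(f"unexpected file name: {name}")
    fold, room, mix = (int(g) for g in m.groups()[:3])
    return {"fold": fold, "room": room, "mix": mix,
            "split": "dev-train" if fold == 3 else "dev-test"}
```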
# Download
git clone
# Example application
A trainable convolutional recurrent neural network (CRNN) model performing joint SELD, trained and evaluated on this dataset, is provided [here](https://github.com/sharathadavanne/seld-dcase2022). This implementation serves as the baseline method in the DCASE 2022 Sound Event Localization and Detection Task.
# License
This dataset is licensed under the [MIT](https://opensource.org/licenses/MIT) license.
|
AdapterOcean/gorilla_16k_standardized_unified | ---
dataset_info:
features:
- name: text
dtype: string
- name: conversation_id
dtype: int64
splits:
- name: train
num_bytes: 14956387
num_examples: 16250
download_size: 0
dataset_size: 14956387
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "gorilla_16k_standardized_unified"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
adonaivera/crowdsourced-calculator-demo | ---
configs:
- config_name: default
data_files:
- split: train
path: data.csv
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
BelleGroup/train_0.5M_CN | ---
license: gpl-3.0
task_categories:
- text2text-generation
language:
- zh
size_categories:
- 100K<n<1M
---
## Contents
About 500K Chinese instruction-following examples generated by the [BELLE](https://github.com/LianjiaTech/BELLE) project.
## Example
```
{
"instruction": "给定一个文字输入,将其中的所有数字加1。\n“明天的会议在9点开始,记得准时到达。”\n",
"input": "",
"output": "“明天的会议在10点开始,记得准时到达。”"
}
```
### Fields:
```
instruction: the instruction
input: the input (empty for every example in this dataset)
output: the output
```
## Usage restrictions
This dataset, and any derivatives generated from it, may be used for research purposes only; commercial use, and any other use that could harm society, is prohibited.
This dataset does not represent the position, interests, or views of any party, and is unrelated to any claims of any group. This project assumes no liability for any damage or dispute arising from the use of this dataset.
|
vikp/hermes_labeled_bad | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: input
dtype: string
- name: rendered
dtype: string
- name: quality_prob
dtype: float64
- name: learning_prob
dtype: float64
splits:
- name: train
num_bytes: 17934912.391210347
num_examples: 6969
download_size: 5517616
dataset_size: 17934912.391210347
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "hermes_labeled_bad"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
clinc_oos | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-3.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- intent-classification
paperswithcode_id: clinc150
pretty_name: CLINC150
dataset_info:
- config_name: imbalanced
features:
- name: text
dtype: string
- name: intent
dtype:
class_label:
names:
'0': restaurant_reviews
'1': nutrition_info
'2': account_blocked
'3': oil_change_how
'4': time
'5': weather
'6': redeem_rewards
'7': interest_rate
'8': gas_type
'9': accept_reservations
'10': smart_home
'11': user_name
'12': report_lost_card
'13': repeat
'14': whisper_mode
'15': what_are_your_hobbies
'16': order
'17': jump_start
'18': schedule_meeting
'19': meeting_schedule
'20': freeze_account
'21': what_song
'22': meaning_of_life
'23': restaurant_reservation
'24': traffic
'25': make_call
'26': text
'27': bill_balance
'28': improve_credit_score
'29': change_language
'30': 'no'
'31': measurement_conversion
'32': timer
'33': flip_coin
'34': do_you_have_pets
'35': balance
'36': tell_joke
'37': last_maintenance
'38': exchange_rate
'39': uber
'40': car_rental
'41': credit_limit
'42': oos
'43': shopping_list
'44': expiration_date
'45': routing
'46': meal_suggestion
'47': tire_change
'48': todo_list
'49': card_declined
'50': rewards_balance
'51': change_accent
'52': vaccines
'53': reminder_update
'54': food_last
'55': change_ai_name
'56': bill_due
'57': who_do_you_work_for
'58': share_location
'59': international_visa
'60': calendar
'61': translate
'62': carry_on
'63': book_flight
'64': insurance_change
'65': todo_list_update
'66': timezone
'67': cancel_reservation
'68': transactions
'69': credit_score
'70': report_fraud
'71': spending_history
'72': directions
'73': spelling
'74': insurance
'75': what_is_your_name
'76': reminder
'77': where_are_you_from
'78': distance
'79': payday
'80': flight_status
'81': find_phone
'82': greeting
'83': alarm
'84': order_status
'85': confirm_reservation
'86': cook_time
'87': damaged_card
'88': reset_settings
'89': pin_change
'90': replacement_card_duration
'91': new_card
'92': roll_dice
'93': income
'94': taxes
'95': date
'96': who_made_you
'97': pto_request
'98': tire_pressure
'99': how_old_are_you
'100': rollover_401k
'101': pto_request_status
'102': how_busy
'103': application_status
'104': recipe
'105': calendar_update
'106': play_music
'107': 'yes'
'108': direct_deposit
'109': credit_limit_change
'110': gas
'111': pay_bill
'112': ingredients_list
'113': lost_luggage
'114': goodbye
'115': what_can_i_ask_you
'116': book_hotel
'117': are_you_a_bot
'118': next_song
'119': change_speed
'120': plug_type
'121': maybe
'122': w2
'123': oil_change_when
'124': thank_you
'125': shopping_list_update
'126': pto_balance
'127': order_checks
'128': travel_alert
'129': fun_fact
'130': sync_device
'131': schedule_maintenance
'132': apr
'133': transfer
'134': ingredient_substitution
'135': calories
'136': current_location
'137': international_fees
'138': calculator
'139': definition
'140': next_holiday
'141': update_playlist
'142': mpg
'143': min_payment
'144': change_user_name
'145': restaurant_suggestion
'146': travel_notification
'147': cancel
'148': pto_used
'149': travel_suggestion
'150': change_volume
splits:
- name: train
num_bytes: 546901
num_examples: 10625
- name: validation
num_bytes: 160298
num_examples: 3100
- name: test
num_bytes: 286966
num_examples: 5500
download_size: 441918
dataset_size: 994165
- config_name: plus
features:
- name: text
dtype: string
- name: intent
dtype:
class_label:
names:
'0': restaurant_reviews
'1': nutrition_info
'2': account_blocked
'3': oil_change_how
'4': time
'5': weather
'6': redeem_rewards
'7': interest_rate
'8': gas_type
'9': accept_reservations
'10': smart_home
'11': user_name
'12': report_lost_card
'13': repeat
'14': whisper_mode
'15': what_are_your_hobbies
'16': order
'17': jump_start
'18': schedule_meeting
'19': meeting_schedule
'20': freeze_account
'21': what_song
'22': meaning_of_life
'23': restaurant_reservation
'24': traffic
'25': make_call
'26': text
'27': bill_balance
'28': improve_credit_score
'29': change_language
'30': 'no'
'31': measurement_conversion
'32': timer
'33': flip_coin
'34': do_you_have_pets
'35': balance
'36': tell_joke
'37': last_maintenance
'38': exchange_rate
'39': uber
'40': car_rental
'41': credit_limit
'42': oos
'43': shopping_list
'44': expiration_date
'45': routing
'46': meal_suggestion
'47': tire_change
'48': todo_list
'49': card_declined
'50': rewards_balance
'51': change_accent
'52': vaccines
'53': reminder_update
'54': food_last
'55': change_ai_name
'56': bill_due
'57': who_do_you_work_for
'58': share_location
'59': international_visa
'60': calendar
'61': translate
'62': carry_on
'63': book_flight
'64': insurance_change
'65': todo_list_update
'66': timezone
'67': cancel_reservation
'68': transactions
'69': credit_score
'70': report_fraud
'71': spending_history
'72': directions
'73': spelling
'74': insurance
'75': what_is_your_name
'76': reminder
'77': where_are_you_from
'78': distance
'79': payday
'80': flight_status
'81': find_phone
'82': greeting
'83': alarm
'84': order_status
'85': confirm_reservation
'86': cook_time
'87': damaged_card
'88': reset_settings
'89': pin_change
'90': replacement_card_duration
'91': new_card
'92': roll_dice
'93': income
'94': taxes
'95': date
'96': who_made_you
'97': pto_request
'98': tire_pressure
'99': how_old_are_you
'100': rollover_401k
'101': pto_request_status
'102': how_busy
'103': application_status
'104': recipe
'105': calendar_update
'106': play_music
'107': 'yes'
'108': direct_deposit
'109': credit_limit_change
'110': gas
'111': pay_bill
'112': ingredients_list
'113': lost_luggage
'114': goodbye
'115': what_can_i_ask_you
'116': book_hotel
'117': are_you_a_bot
'118': next_song
'119': change_speed
'120': plug_type
'121': maybe
'122': w2
'123': oil_change_when
'124': thank_you
'125': shopping_list_update
'126': pto_balance
'127': order_checks
'128': travel_alert
'129': fun_fact
'130': sync_device
'131': schedule_maintenance
'132': apr
'133': transfer
'134': ingredient_substitution
'135': calories
'136': current_location
'137': international_fees
'138': calculator
'139': definition
'140': next_holiday
'141': update_playlist
'142': mpg
'143': min_payment
'144': change_user_name
'145': restaurant_suggestion
'146': travel_notification
'147': cancel
'148': pto_used
'149': travel_suggestion
'150': change_volume
splits:
- name: train
num_bytes: 791247
num_examples: 15250
- name: validation
num_bytes: 160298
num_examples: 3100
- name: test
num_bytes: 286966
num_examples: 5500
download_size: 525729
dataset_size: 1238511
- config_name: small
features:
- name: text
dtype: string
- name: intent
dtype:
class_label:
names:
'0': restaurant_reviews
'1': nutrition_info
'2': account_blocked
'3': oil_change_how
'4': time
'5': weather
'6': redeem_rewards
'7': interest_rate
'8': gas_type
'9': accept_reservations
'10': smart_home
'11': user_name
'12': report_lost_card
'13': repeat
'14': whisper_mode
'15': what_are_your_hobbies
'16': order
'17': jump_start
'18': schedule_meeting
'19': meeting_schedule
'20': freeze_account
'21': what_song
'22': meaning_of_life
'23': restaurant_reservation
'24': traffic
'25': make_call
'26': text
'27': bill_balance
'28': improve_credit_score
'29': change_language
'30': 'no'
'31': measurement_conversion
'32': timer
'33': flip_coin
'34': do_you_have_pets
'35': balance
'36': tell_joke
'37': last_maintenance
'38': exchange_rate
'39': uber
'40': car_rental
'41': credit_limit
'42': oos
'43': shopping_list
'44': expiration_date
'45': routing
'46': meal_suggestion
'47': tire_change
'48': todo_list
'49': card_declined
'50': rewards_balance
'51': change_accent
'52': vaccines
'53': reminder_update
'54': food_last
'55': change_ai_name
'56': bill_due
'57': who_do_you_work_for
'58': share_location
'59': international_visa
'60': calendar
'61': translate
'62': carry_on
'63': book_flight
'64': insurance_change
'65': todo_list_update
'66': timezone
'67': cancel_reservation
'68': transactions
'69': credit_score
'70': report_fraud
'71': spending_history
'72': directions
'73': spelling
'74': insurance
'75': what_is_your_name
'76': reminder
'77': where_are_you_from
'78': distance
'79': payday
'80': flight_status
'81': find_phone
'82': greeting
'83': alarm
'84': order_status
'85': confirm_reservation
'86': cook_time
'87': damaged_card
'88': reset_settings
'89': pin_change
'90': replacement_card_duration
'91': new_card
'92': roll_dice
'93': income
'94': taxes
'95': date
'96': who_made_you
'97': pto_request
'98': tire_pressure
'99': how_old_are_you
'100': rollover_401k
'101': pto_request_status
'102': how_busy
'103': application_status
'104': recipe
'105': calendar_update
'106': play_music
'107': 'yes'
'108': direct_deposit
'109': credit_limit_change
'110': gas
'111': pay_bill
'112': ingredients_list
'113': lost_luggage
'114': goodbye
'115': what_can_i_ask_you
'116': book_hotel
'117': are_you_a_bot
'118': next_song
'119': change_speed
'120': plug_type
'121': maybe
'122': w2
'123': oil_change_when
'124': thank_you
'125': shopping_list_update
'126': pto_balance
'127': order_checks
'128': travel_alert
'129': fun_fact
'130': sync_device
'131': schedule_maintenance
'132': apr
'133': transfer
'134': ingredient_substitution
'135': calories
'136': current_location
'137': international_fees
'138': calculator
'139': definition
'140': next_holiday
'141': update_playlist
'142': mpg
'143': min_payment
'144': change_user_name
'145': restaurant_suggestion
'146': travel_notification
'147': cancel
'148': pto_used
'149': travel_suggestion
'150': change_volume
splits:
- name: train
num_bytes: 394124
num_examples: 7600
- name: validation
num_bytes: 160298
num_examples: 3100
- name: test
num_bytes: 286966
num_examples: 5500
download_size: 385185
dataset_size: 841388
configs:
- config_name: imbalanced
data_files:
- split: train
path: imbalanced/train-*
- split: validation
path: imbalanced/validation-*
- split: test
path: imbalanced/test-*
- config_name: plus
data_files:
- split: train
path: plus/train-*
- split: validation
path: plus/validation-*
- split: test
path: plus/test-*
- config_name: small
data_files:
- split: train
path: small/train-*
- split: validation
path: small/validation-*
- split: test
path: small/test-*
---
# Dataset Card for CLINC150
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/clinc/oos-eval/)
- **Repository:** [Github](https://github.com/clinc/oos-eval/)
- **Paper:** [Aclweb](https://www.aclweb.org/anthology/D19-1131)
- **Leaderboard:** [PapersWithCode](https://paperswithcode.com/sota/text-classification-on-clinc-oos)
- **Point of Contact:**
### Dataset Summary
Task-oriented dialog systems need to know when a query falls outside their range of supported intents, but current text classification corpora only define label sets that cover every example. We introduce a new dataset that includes queries that are out-of-scope (OOS), i.e., queries that do not fall into any of the system's supported intents. This poses a new challenge because models cannot assume that every query at inference time belongs to a system-supported intent class. Our dataset also covers 150 intent classes over 10 domains, capturing the breadth that a production task-oriented agent must handle. It offers a way of more rigorously and realistically benchmarking text classification in task-driven dialog systems.
### Supported Tasks and Leaderboards
- `intent-classification`: This dataset is for evaluating the performance of intent classification systems in the presence of "out-of-scope" queries, i.e., queries that do not fall into any of the system-supported intent classes. The dataset includes both in-scope and out-of-scope data. A leaderboard is available [here](https://paperswithcode.com/sota/text-classification-on-clinc-oos).
### Languages
English
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{
'text' : 'can you walk me through setting up direct deposits to my bank of internet savings account',
'label' : 108
}
```
### Data Fields
- text: the query text
- label: one of 150 in-scope intent classes over 10 domains, plus a single `oos` label for out-of-scope queries.
The Label Id to Label Name map is mentioned in the table below:
| **Label Id** | **Label name** |
|--- |--- |
| 0 | restaurant_reviews |
| 1 | nutrition_info |
| 2 | account_blocked |
| 3 | oil_change_how |
| 4 | time |
| 5 | weather |
| 6 | redeem_rewards |
| 7 | interest_rate |
| 8 | gas_type |
| 9 | accept_reservations |
| 10 | smart_home |
| 11 | user_name |
| 12 | report_lost_card |
| 13 | repeat |
| 14 | whisper_mode |
| 15 | what_are_your_hobbies |
| 16 | order |
| 17 | jump_start |
| 18 | schedule_meeting |
| 19 | meeting_schedule |
| 20 | freeze_account |
| 21 | what_song |
| 22 | meaning_of_life |
| 23 | restaurant_reservation |
| 24 | traffic |
| 25 | make_call |
| 26 | text |
| 27 | bill_balance |
| 28 | improve_credit_score |
| 29 | change_language |
| 30 | no |
| 31 | measurement_conversion |
| 32 | timer |
| 33 | flip_coin |
| 34 | do_you_have_pets |
| 35 | balance |
| 36 | tell_joke |
| 37 | last_maintenance |
| 38 | exchange_rate |
| 39 | uber |
| 40 | car_rental |
| 41 | credit_limit |
| 42 | oos |
| 43 | shopping_list |
| 44 | expiration_date |
| 45 | routing |
| 46 | meal_suggestion |
| 47 | tire_change |
| 48 | todo_list |
| 49 | card_declined |
| 50 | rewards_balance |
| 51 | change_accent |
| 52 | vaccines |
| 53 | reminder_update |
| 54 | food_last |
| 55 | change_ai_name |
| 56 | bill_due |
| 57 | who_do_you_work_for |
| 58 | share_location |
| 59 | international_visa |
| 60 | calendar |
| 61 | translate |
| 62 | carry_on |
| 63 | book_flight |
| 64 | insurance_change |
| 65 | todo_list_update |
| 66 | timezone |
| 67 | cancel_reservation |
| 68 | transactions |
| 69 | credit_score |
| 70 | report_fraud |
| 71 | spending_history |
| 72 | directions |
| 73 | spelling |
| 74 | insurance |
| 75 | what_is_your_name |
| 76 | reminder |
| 77 | where_are_you_from |
| 78 | distance |
| 79 | payday |
| 80 | flight_status |
| 81 | find_phone |
| 82 | greeting |
| 83 | alarm |
| 84 | order_status |
| 85 | confirm_reservation |
| 86 | cook_time |
| 87 | damaged_card |
| 88 | reset_settings |
| 89 | pin_change |
| 90 | replacement_card_duration |
| 91 | new_card |
| 92 | roll_dice |
| 93 | income |
| 94 | taxes |
| 95 | date |
| 96 | who_made_you |
| 97 | pto_request |
| 98 | tire_pressure |
| 99 | how_old_are_you |
| 100 | rollover_401k |
| 101 | pto_request_status |
| 102 | how_busy |
| 103 | application_status |
| 104 | recipe |
| 105 | calendar_update |
| 106 | play_music |
| 107 | yes |
| 108 | direct_deposit |
| 109 | credit_limit_change |
| 110 | gas |
| 111 | pay_bill |
| 112 | ingredients_list |
| 113 | lost_luggage |
| 114 | goodbye |
| 115 | what_can_i_ask_you |
| 116 | book_hotel |
| 117 | are_you_a_bot |
| 118 | next_song |
| 119 | change_speed |
| 120 | plug_type |
| 121 | maybe |
| 122 | w2 |
| 123 | oil_change_when |
| 124 | thank_you |
| 125 | shopping_list_update |
| 126 | pto_balance |
| 127 | order_checks |
| 128 | travel_alert |
| 129 | fun_fact |
| 130 | sync_device |
| 131 | schedule_maintenance |
| 132 | apr |
| 133 | transfer |
| 134 | ingredient_substitution |
| 135 | calories |
| 136 | current_location |
| 137 | international_fees |
| 138 | calculator |
| 139 | definition |
| 140 | next_holiday |
| 141 | update_playlist |
| 142 | mpg |
| 143 | min_payment |
| 144 | change_user_name |
| 145 | restaurant_suggestion |
| 146 | travel_notification |
| 147 | cancel |
| 148 | pto_used |
| 149 | travel_suggestion |
| 150 | change_volume |
### Data Splits
The dataset comes in different subsets:
- `small`: in which there are only 50 training queries per in-scope intent
- `imbalanced`: in which intents have either 25, 50, 75, or 100 training queries
- `plus` (OOS+): in which there are 250 out-of-scope training examples rather than 100
| name |train|validation|test|
|----------|----:|---------:|---:|
|small|7600| 3100| 5500 |
|imbalanced|10625| 3100| 5500|
|plus|15250| 3100| 5500|
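Because the `oos` class is handled separately at evaluation time, results on this dataset are typically reported as in-scope accuracy plus out-of-scope recall. A minimal sketch, assuming label id 42 for `oos` as in the label table above (the helper itself is hypothetical):

```python
OOS = 42  # label id of the out-of-scope class (see the label table above)

def clinc_metrics(y_true, y_pred):
    """Return (in-scope accuracy, out-of-scope recall)."""
    in_scope = [(t, p) for t, p in zip(y_true, y_pred) if t != OOS]
    oos_preds = [p for t, p in zip(y_true, y_pred) if t == OOS]
    acc = sum(t == p for t, p in in_scope) / len(in_scope) if in_scope else 0.0
    rec = sum(p == OOS for p in oos_preds) / len(oos_preds) if oos_preds else 0.0
    return acc, rec
```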
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{larson-etal-2019-evaluation,
title = "An Evaluation Dataset for Intent Classification and Out-of-Scope Prediction",
author = "Larson, Stefan and
Mahendran, Anish and
Peper, Joseph J. and
Clarke, Christopher and
Lee, Andrew and
Hill, Parker and
Kummerfeld, Jonathan K. and
Leach, Kevin and
Laurenzano, Michael A. and
Tang, Lingjia and
Mars, Jason",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
year = "2019",
url = "https://www.aclweb.org/anthology/D19-1131"
}
```
### Contributions
Thanks to [@sumanthd17](https://github.com/sumanthd17) for adding this dataset. |
MohammedNasri/Test | ---
dataset_info:
features:
- name: audio
dtype: binary
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 78569
num_examples: 1
download_size: 79523
dataset_size: 78569
---
# Dataset Card for "Test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Tonyhacker/8bit | ---
license: openrail
---
|
sdansdk/processed_meta_review | ---
dataset_info:
features:
- name: Input
dtype: string
- name: Output
dtype: string
splits:
- name: train
num_bytes: 81617052
num_examples: 7680
- name: validation
num_bytes: 17524553
num_examples: 1645
- name: test
num_bytes: 17471237
num_examples: 1645
download_size: 58593680
dataset_size: 116612842
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
Arefffffffffffffffffff/Mirza | ---
license: mit
---
|
mtkinit/dsaaseas | ---
pretty_name: dsaaseas
---
# dsaaseas
Created from AIOD platform |
Sarangr2005/Terms_and_conditions | ---
license: llama2
---
|
satishsatpal/trial_pamb1 | ---
license: mit
---
|
wenqiglantz/databricks-dolly-1k | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: category
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1338157
num_examples: 1000
download_size: 842842
dataset_size: 1338157
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
This is a subset (1000 samples) of [`databricks/databricks-dolly-15k`](https://huggingface.co/datasets/databricks/databricks-dolly-15k) dataset, processed to match Mistral-7B-instruct-v0.2's prompt format. It was created using the [colab notebook](https://colab.research.google.com/drive/1sRy-FT4nqOKG9_K6i4txgDKYkOAdCfBQ?usp=sharing).
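The exact prompt template is not reproduced in this card; the sketch below shows one plausible Mistral-instruct rendering of a record (the `[INST]` template here is an assumption — see the linked notebook for the actual processing):

```python
def to_mistral_prompt(example):
    """Render a dolly-style record as a Mistral-instruct training string.

    The [INST] template is an assumption; check the notebook linked
    above for the exact format used to build the `text` column.
    """
    ctx = f"\n\nContext:\n{example['context']}" if example.get("context") else ""
    return f"<s>[INST] {example['instruction']}{ctx} [/INST] {example['response']}</s>"
```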
|
Hengzongshu/math_LinearAlgebra_ner | ---
license: mit
task_categories:
- token-classification
language:
- en
tags:
- math
size_categories:
- 10K<n<100K
---
This is a lightweight training dataset created from Wikipedia, primarily intended for English named entity recognition (NER) in the field of linear algebra, as well as for knowledge graph construction.
Sentences were selected by following links between Wikipedia entries, resulting in 234 sentences. The sentences contain no punctuation or numbers, and letter case is not distinguished.
Annotation was done with Doccano; the data is available in JSONL, dataset, and TXT formats (JSONL can be converted to JSON). |
natarojas/luffydois | ---
license: openrail
---
|
hajili/azerbaijani_tweet_emotion_classification | ---
license: mit
task_categories:
- text-classification
language:
- az
size_categories:
- 100K<n<1M
---
This dataset contains 150K (train + test) cleaned tweets in Azerbaijani. The tweets were collected in 2021, then filtered and cleaned with the following steps:
- Initial data were collected using the twint library. The tool is now deprecated and cannot be used with the current Twitter.
- On top of the already filtered data, I applied an additional filter to select Azerbaijani tweets using the fastText language identification model.
- Tweets were classified into 3 emotion categories {positive: 1, negative: -1, neutral: 0}, using emojis as a rule-based classifier.
- Tags, usernames, and emojis were then cleaned.
- Short tweets were filtered out. |
epinnock/evol-instruct-10k-with-embeddings-and-feedback-no-dups | ---
dataset_info:
features:
- name: input
dtype: string
- name: response
dtype: string
- name: embeddings
sequence: float64
- name: cluster
dtype: int64
- name: generations
sequence: string
- name: labelling_model
dtype: string
- name: labelling_prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: raw_labelling_response
dtype: string
- name: rating
sequence: float64
- name: rationale
sequence: string
splits:
- name: train
num_bytes: 153991530
num_examples: 9863
download_size: 91660784
dataset_size: 153991530
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
senhorsapo/rouge | ---
license: openrail
---
|
rodrigofernandesgo/AI_Business_Strategies_and_Applications | ---
license: apache-2.0
---
|
Ekimetrics/climateqa-ipcc-ipbes-reports-1.0 | ---
license: apache-2.0
---
|
polplop/cnndm_llama2_7b_chat_summary | ---
dataset_info:
features:
- name: article
dtype: string
- name: highlights
dtype: string
- name: id
dtype: string
- name: clean_summary
dtype: string
- name: summary_summacConv_scores
dtype: float64
- name: highlight_summacConv_scores
dtype: float64
splits:
- name: test
num_bytes: 813399
num_examples: 200
download_size: 538654
dataset_size: 813399
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
# Dataset Card for "cnndm_llama2_7b_chat_summary"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Prometutas/elivoz | ---
license: openrail
---
|
lumatic-ai/BongChat-v0-10k | ---
license: cc
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 30070250
num_examples: 9500
download_size: 11212453
dataset_size: 30070250
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Nexdata/Spanish_Conversational_Speech_Data_by_Mobile_Phone | ---
---
# Dataset Card for Nexdata/Spanish_Conversational_Speech_Data_by_Mobile_Phone
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.nexdata.ai/datasets/1147?source=Huggingface
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
About 1,000 speakers participated in the recording, communicating face to face in a natural way. They held free discussions on a number of given topics spanning a wide range of fields; the speech is natural and fluent, matching real dialogue scenarios. Transcription was done manually, with high accuracy.
For more details, please refer to the link: https://www.nexdata.ai/datasets/1147?source=Huggingface
### Supported Tasks and Leaderboards
automatic-speech-recognition, audio-speaker-identification: the dataset can be used to train models for automatic speech recognition (ASR) and speaker identification.
### Languages
Spanish
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing
### Citation Information
[More Information Needed]
### Contributions
|
mnoukhov/openai_summarize_generated_10-20k | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 17968583
num_examples: 10000
download_size: 10938682
dataset_size: 17968583
---
# Dataset Card for "openai_summarize_generated_10-20k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/kaga_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of kaga/加賀/加贺 (Azur Lane)
This is the dataset of kaga/加賀/加贺 (Azur Lane), containing 500 images and their tags.
The core tags of this character are `animal_ears, fox_ears, white_hair, short_hair, blue_eyes, tail, fox_tail, breasts, bangs, multiple_tails, fox_girl, large_breasts, animal_ear_fluff, blunt_bangs`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 849.97 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kaga_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 434.52 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kaga_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1289 | 965.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kaga_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 733.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kaga_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1289 | 1.43 GiB | [Download](https://huggingface.co/datasets/CyberHarem/kaga_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kaga_azurlane',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 24 |  |  |  |  |  | looking_at_viewer, 1girl, solo, cleavage, blush, navel, blue_bikini, collarbone, fox_mask, simple_background, white_background, bare_shoulders, mask_on_head, smile, sarong, closed_mouth |
| 1 | 5 |  |  |  |  |  | 1girl, looking_at_viewer, simple_background, solo, white_background, makeup, smile |
| 2 | 23 |  |  |  |  |  | 1girl, solo, looking_at_viewer, white_kimono, wide_sleeves, kitsune, eyeshadow, cleavage, upper_body, eyeliner |
| 3 | 12 |  |  |  |  |  | 1girl, blue_skirt, fox_mask, solo, white_kimono, wide_sleeves, holding_mask, looking_at_viewer, cleavage, hakama_short_skirt, smile, makeup, pleated_skirt, kitsune |
| 4 | 25 |  |  |  |  |  | 1girl, fox_mask, white_kimono, fur_trim, solo, mask_on_head, black_gloves, looking_at_viewer, wide_sleeves, fingerless_gloves, eyeshadow, kitsune, eyeliner, smile, blush, hamaya, holding, obi |
| 5 | 5 |  |  |  |  |  | blue_sky, cleavage, cloud, navel, 1girl, day, eyeliner, looking_at_viewer, outdoors, solo, white_bikini, cowboy_shot, kitsune, white_tail, cameltoe, floral_print |
| 6 | 6 |  |  |  |  |  | 1girl, abs, blue_sky, day, looking_at_viewer, miniskirt, navel, ocean, outdoors, solo, water, cleavage, gigantic_breasts, huge_breasts, lips, pleated_skirt, shiny_skin, standing, thick_thighs, curvy, revealing_clothes, smile, sunlight, thigh_gap, veins, alternate_breast_size, artist_name, beach, black_bikini, black_skirt, blush, cloudy_sky, cowboy_shot, eyeshadow, panties, ship, wide_sleeves |
| 7 | 27 |  |  |  |  |  | blue_dress, cleavage, looking_at_viewer, 1girl, official_alternate_costume, bare_shoulders, solo, thighs, necklace, large_tail, sitting, halter_dress, blush, kyuubi, evening_gown, smile, white_tail, eyeliner, sleeveless_dress |
| 8 | 5 |  |  |  |  |  | 1boy, 1girl, blush, cum_in_pussy, hetero, huge_breasts, navel, sex, solo_focus, vaginal, bob_cut, completely_nude, cowgirl_position, girl_on_top, looking_at_viewer, nipples, penis, pov, smile, spread_legs, collarbone, cum_on_breasts, thighs, white_tail, closed_mouth, kitsune, uncensored |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | looking_at_viewer | 1girl | solo | cleavage | blush | navel | blue_bikini | collarbone | fox_mask | simple_background | white_background | bare_shoulders | mask_on_head | smile | sarong | closed_mouth | makeup | white_kimono | wide_sleeves | kitsune | eyeshadow | upper_body | eyeliner | blue_skirt | holding_mask | hakama_short_skirt | pleated_skirt | fur_trim | black_gloves | fingerless_gloves | hamaya | holding | obi | blue_sky | cloud | day | outdoors | white_bikini | cowboy_shot | white_tail | cameltoe | floral_print | abs | miniskirt | ocean | water | gigantic_breasts | huge_breasts | lips | shiny_skin | standing | thick_thighs | curvy | revealing_clothes | sunlight | thigh_gap | veins | alternate_breast_size | artist_name | beach | black_bikini | black_skirt | cloudy_sky | panties | ship | blue_dress | official_alternate_costume | thighs | necklace | large_tail | sitting | halter_dress | kyuubi | evening_gown | sleeveless_dress | 1boy | cum_in_pussy | hetero | sex | solo_focus | vaginal | bob_cut | completely_nude | cowgirl_position | girl_on_top | nipples | penis | pov | spread_legs | cum_on_breasts | uncensored |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------|:--------|:-------|:-----------|:--------|:--------|:--------------|:-------------|:-----------|:--------------------|:-------------------|:-----------------|:---------------|:--------|:---------|:---------------|:---------|:---------------|:---------------|:----------|:------------|:-------------|:-----------|:-------------|:---------------|:---------------------|:----------------|:-----------|:---------------|:--------------------|:---------|:----------|:------|:-----------|:--------|:------|:-----------|:---------------|:--------------|:-------------|:-----------|:---------------|:------|:------------|:--------|:--------|:-------------------|:---------------|:-------|:-------------|:-----------|:---------------|:--------|:--------------------|:-----------|:------------|:--------|:------------------------|:--------------|:--------|:---------------|:--------------|:-------------|:----------|:-------|:-------------|:-----------------------------|:---------|:-----------|:-------------|:----------|:---------------|:---------|:---------------|:-------------------|:-------|:---------------|:---------|:------|:-------------|:----------|:----------|:------------------|:-------------------|:--------------|:----------|:--------|:------|:--------------|:-----------------|:-------------|
| 0 | 24 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | | | | | | | X | X | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 23 |  |  |  |  |  | X | X | X | X | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 12 |  |  |  |  |  | X | X | X | X | | | | | X | | | | | X | | | X | X | X | X | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 25 |  |  |  |  |  | X | X | X | | X | | | | X | | | | X | X | | | | X | X | X | X | | X | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | X | X | X | | X | | | | | | | | | | | | | | X | | | X | | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 6 |  |  |  |  |  | X | X | X | X | X | X | | | | | | | | X | | | | | X | | X | | | | | | X | | | | | | | X | | X | X | | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 27 |  |  |  |  |  | X | X | X | X | X | | | | | | | X | | X | | | | | | | | | X | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | |
| 8 | 5 |  |  |  |  |  | X | X | | | X | X | | X | | | | | | X | | X | | | | X | | | | | | | | | | | | | | | | | | | | X | | | | | | | | X | | | | | | | | | | | | | | | | | | | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
tyzhu/find_first_sent_train_500_eval_20_baseline | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: title
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 750626
num_examples: 442
- name: validation
num_bytes: 38037
num_examples: 20
download_size: 0
dataset_size: 788663
---
# Dataset Card for "find_first_sent_train_500_eval_20_baseline"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Nise/NisePhotos | ---
license: openrail
---
|
CyberHarem/satsuki_kantaicollection | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of satsuki/皐月 (Kantai Collection)
This is the dataset of satsuki/皐月 (Kantai Collection), containing 500 images and their tags.
The core tags of this character are `long_hair, blonde_hair, twintails, yellow_eyes, low_twintails, hair_between_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 508.51 MiB | [Download](https://huggingface.co/datasets/CyberHarem/satsuki_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 302.18 MiB | [Download](https://huggingface.co/datasets/CyberHarem/satsuki_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1210 | 658.43 MiB | [Download](https://huggingface.co/datasets/CyberHarem/satsuki_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 456.02 MiB | [Download](https://huggingface.co/datasets/CyberHarem/satsuki_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1210 | 908.20 MiB | [Download](https://huggingface.co/datasets/CyberHarem/satsuki_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/satsuki_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, black_sailor_collar, black_skirt, black_thighhighs, blue_jacket, brown_footwear, crescent_pin, long_sleeves, serafuku, smile, solo, armband, looking_at_viewer, open_mouth, shoes, simple_background, white_background, yellow_neckerchief, necktie, blush, pleated_skirt |
| 1 | 9 |  |  |  |  |  | 1girl, black_skirt, black_thighhighs, crescent_pin, long_sleeves, serafuku, solo, black_sailor_collar, blue_jacket, looking_at_viewer, yellow_neckerchief, open_mouth, smile, yellow_necktie, blush |
| 2 | 24 |  |  |  |  |  | 1girl, black_sailor_collar, crescent_pin, serafuku, solo, upper_body, simple_background, yellow_neckerchief, blue_jacket, long_sleeves, white_background, armband, smile, looking_at_viewer, blush, yellow_necktie, open_mouth |
| 3 | 7 |  |  |  |  |  | 1girl, black_serafuku, black_skirt, black_thighhighs, long_sleeves, looking_at_viewer, sailor_collar, simple_background, solo, blush, crescent_pin, neckerchief, white_background, very_long_hair, pleated_skirt, white_necktie, bangs, belt, cowboy_shot, sitting |
| 4 | 22 |  |  |  |  |  | 1girl, long_sleeves, solo, black_serafuku, looking_at_viewer, smile, necktie, black_thighhighs, blush, black_skirt, open_mouth, simple_background, crescent_pin, white_background |
| 5 | 9 |  |  |  |  |  | 1girl, solo, cowboy_shot, looking_at_viewer, bikini, navel, white_background, blush, simple_background, smile, collarbone, open_mouth, white_shirt |
| 6 | 10 |  |  |  |  |  | 1girl, looking_at_viewer, solo, blush, collarbone, blue_one-piece_swimsuit, simple_background, smile, twitter_username, cowboy_shot, open_mouth, covered_navel, old_school_swimsuit, white_background |
| 7 | 8 |  |  |  |  |  | black_dress, enmaided, maid_headdress, 1girl, maid_apron, white_apron, solo, frilled_apron, smile, blush, puffy_sleeves, short_sleeves, simple_background, thighhighs, black_footwear, crescent_pin, full_body, long_sleeves, open_mouth |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_sailor_collar | black_skirt | black_thighhighs | blue_jacket | brown_footwear | crescent_pin | long_sleeves | serafuku | smile | solo | armband | looking_at_viewer | open_mouth | shoes | simple_background | white_background | yellow_neckerchief | necktie | blush | pleated_skirt | yellow_necktie | upper_body | black_serafuku | sailor_collar | neckerchief | very_long_hair | white_necktie | bangs | belt | cowboy_shot | sitting | bikini | navel | collarbone | white_shirt | blue_one-piece_swimsuit | twitter_username | covered_navel | old_school_swimsuit | black_dress | enmaided | maid_headdress | maid_apron | white_apron | frilled_apron | puffy_sleeves | short_sleeves | thighhighs | black_footwear | full_body |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------------------|:--------------|:-------------------|:--------------|:-----------------|:---------------|:---------------|:-----------|:--------|:-------|:----------|:--------------------|:-------------|:--------|:--------------------|:-------------------|:---------------------|:----------|:--------|:----------------|:-----------------|:-------------|:-----------------|:----------------|:--------------|:-----------------|:----------------|:--------|:-------|:--------------|:----------|:---------|:--------|:-------------|:--------------|:--------------------------|:-------------------|:----------------|:----------------------|:--------------|:-----------|:-----------------|:-------------|:--------------|:----------------|:----------------|:----------------|:-------------|:-----------------|:------------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 9 |  |  |  |  |  | X | X | X | X | X | | X | X | X | X | X | | X | X | | | | X | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 24 |  |  |  |  |  | X | X | | | X | | X | X | X | X | X | X | X | X | | X | X | X | | X | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 7 |  |  |  |  |  | X | | X | X | | | X | X | | | X | | X | | | X | X | | | X | X | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | |
| 4 | 22 |  |  |  |  |  | X | | X | X | | | X | X | | X | X | | X | X | | X | X | | X | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 9 |  |  |  |  |  | X | | | | | | | | | X | X | | X | X | | X | X | | | X | | | | | | | | | | | X | | X | X | X | X | | | | | | | | | | | | | | | |
| 6 | 10 |  |  |  |  |  | X | | | | | | | | | X | X | | X | X | | X | X | | | X | | | | | | | | | | | X | | | | X | | X | X | X | X | | | | | | | | | | | |
| 7 | 8 |  |  |  |  |  | X | | | | | | X | X | | X | X | | | X | | X | | | | X | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X |
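For the IMG+TXT packages, pairing each image with its tags can be sketched as follows. This assumes the conventional layout in which each image is accompanied by a same-stem `.txt` file of comma-separated tags; it is a sketch, not official loading code:

```python
import os

def pair_images_with_tags(dataset_dir: str) -> dict:
    """Map each image filename in dataset_dir to its tag list, read from
    the same-stem .txt sidecar file (assumed IMG+TXT layout)."""
    pairs = {}
    for name in os.listdir(dataset_dir):
        stem, ext = os.path.splitext(name)
        if ext.lower() not in ('.png', '.jpg', '.jpeg', '.webp'):
            continue
        txt_path = os.path.join(dataset_dir, stem + '.txt')
        if os.path.exists(txt_path):
            with open(txt_path, encoding='utf-8') as f:
                pairs[name] = [t.strip() for t in f.read().split(',')]
    return pairs

# After extracting e.g. dataset-800.zip into 'dataset_dir' (as in the
# waifuc example above), build the mapping:
# tags_by_image = pair_images_with_tags('dataset_dir')
```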
|
eurafamaciel/aeon | ---
license: mit
---
|
dariolopez/Llama-2-databricks-dolly-oasst1-es-lower-512-tokens | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 15747514.054216867
num_examples: 16302
download_size: 6110375
dataset_size: 15747514.054216867
language:
- es
size_categories:
- 10K<n<100K
---
# Llama-2-databricks-dolly-oasst1-es-lower-512-tokens
A union of https://huggingface.co/datasets/dariolopez/Llama-2-databricks-dolly-es and https://huggingface.co/datasets/dariolopez/Llama-2-oasst1-es.
Filtered to texts with fewer than 512 tokens. |
theblackcat102/llm-plugins | ---
license: cc-by-nc-4.0
task_categories:
- text2text-generation
language:
- en
- zh
size_categories:
- n<1K
---
A transformed version of the MOSS [tool use dataset](https://github.com/OpenLMLab/MOSS/tree/main/SFT_data/conversations/conversation_with_plugins).
It currently contains only 500 conversations.
Changes:
1. Easy to integrate with existing conversation-style datasets following formats such as evol v2 and LIMA, where a full conversation is stored as a list in which even indices hold the human prompts and odd indices hold the model responses.
```json
{
"conversations": [
"Can you create a cityscape with buildings and a mountain in the background?",
"<|thought|>The user's demand is to draw picture, and I need to generate commands that can draw high-quality image according to the user's needs.<|command|>Text2Image(\"a city with buildings and a mountain in the background\")",
"Image generated successfully.",
"Well, here is the generated image."
],
"settings": "- Inner thoughts: enabled.\n- Web search: disabled.\n- Calculator: disabled.\n- Equation solver: disabled.\n- Text-to-image: enabled. API: Text2Image(description)\n- Image edition: disabled.\n- Text-to-speech: disabled.\n",
"mode": "text2img"
}
```
2. Moved the settings out into a separate column, so users can choose whether to prepend them back to the first conversation round as a system setting. |
CyberHarem/sumino_sayaka_ahogirl | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Sumino Sayaka
This is the dataset of Sumino Sayaka, containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 424 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 424 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 424 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 424 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
Lollitor/POCKETMARKED | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: ID
dtype: string
- name: INPUT
dtype: string
- name: LABEL
dtype: float64
splits:
- name: train
num_bytes: 5614688.369673617
num_examples: 7941
- name: validation
num_bytes: 624325.6303263826
num_examples: 883
download_size: 3119101
dataset_size: 6239014.0
---
# Dataset Card for "POCKETMARKED"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TuringsSolutions/GlobalFunctionCallingTrainingSetSmall | ---
license: mit
---
|
open-llm-leaderboard/details_camel-ai__CAMEL-13B-Role-Playing-Data | ---
pretty_name: Evaluation run of camel-ai/CAMEL-13B-Role-Playing-Data
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [camel-ai/CAMEL-13B-Role-Playing-Data](https://huggingface.co/camel-ai/CAMEL-13B-Role-Playing-Data)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
  \nThe dataset is composed of 64 configurations, each one corresponding to one of the\
  \ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
  \ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
  \ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_camel-ai__CAMEL-13B-Role-Playing-Data\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-25T02:33:54.730423](https://huggingface.co/datasets/open-llm-leaderboard/details_camel-ai__CAMEL-13B-Role-Playing-Data/blob/main/results_2023-10-25T02-33-54.730423.json)(note\
  \ that there might be results for other tasks in the repos if successive evals didn't\
  \ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.004404362416107382,\n\
\ \"em_stderr\": 0.000678145162047963,\n \"f1\": 0.06661703020134248,\n\
\ \"f1_stderr\": 0.001491591221438747,\n \"acc\": 0.4069360263718957,\n\
\ \"acc_stderr\": 0.009756268229958965\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.004404362416107382,\n \"em_stderr\": 0.000678145162047963,\n\
\ \"f1\": 0.06661703020134248,\n \"f1_stderr\": 0.001491591221438747\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.07354056103108415,\n \
\ \"acc_stderr\": 0.007189835754365264\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7403314917127072,\n \"acc_stderr\": 0.012322700705552667\n\
\ }\n}\n```"
repo_url: https://huggingface.co/camel-ai/CAMEL-13B-Role-Playing-Data
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|arc:challenge|25_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_25T02_33_54.730423
path:
- '**/details_harness|drop|3_2023-10-25T02-33-54.730423.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-25T02-33-54.730423.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_25T02_33_54.730423
path:
- '**/details_harness|gsm8k|5_2023-10-25T02-33-54.730423.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-25T02-33-54.730423.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hellaswag|10_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T18:40:55.376784.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T18:40:55.376784.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T18:40:55.376784.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_25T02_33_54.730423
path:
- '**/details_harness|winogrande|5_2023-10-25T02-33-54.730423.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-25T02-33-54.730423.parquet'
- config_name: results
data_files:
- split: 2023_07_19T18_40_55.376784
path:
- results_2023-07-19T18:40:55.376784.parquet
- split: 2023_10_25T02_33_54.730423
path:
- results_2023-10-25T02-33-54.730423.parquet
- split: latest
path:
- results_2023-10-25T02-33-54.730423.parquet
---
# Dataset Card for Evaluation run of camel-ai/CAMEL-13B-Role-Playing-Data
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/camel-ai/CAMEL-13B-Role-Playing-Data
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [camel-ai/CAMEL-13B-Role-Playing-Data](https://huggingface.co/camel-ai/CAMEL-13B-Role-Playing-Data) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, with the split named after the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_camel-ai__CAMEL-13B-Role-Playing-Data",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-25T02:33:54.730423](https://huggingface.co/datasets/open-llm-leaderboard/details_camel-ai__CAMEL-13B-Role-Playing-Data/blob/main/results_2023-10-25T02-33-54.730423.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task in the results and in the "latest" split of each eval):
```python
{
"all": {
"em": 0.004404362416107382,
"em_stderr": 0.000678145162047963,
"f1": 0.06661703020134248,
"f1_stderr": 0.001491591221438747,
"acc": 0.4069360263718957,
"acc_stderr": 0.009756268229958965
},
"harness|drop|3": {
"em": 0.004404362416107382,
"em_stderr": 0.000678145162047963,
"f1": 0.06661703020134248,
"f1_stderr": 0.001491591221438747
},
"harness|gsm8k|5": {
"acc": 0.07354056103108415,
"acc_stderr": 0.007189835754365264
},
"harness|winogrande|5": {
"acc": 0.7403314917127072,
"acc_stderr": 0.012322700705552667
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
MoonIcee/Lucas | ---
license: openrail
---
|
tyzhu/fw_num_bi_train_1000_eval_100 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: train_doc2id
path: data/train_doc2id-*
- split: train_id2doc
path: data/train_id2doc-*
- split: train_find_word
path: data/train_find_word-*
- split: eval_find_word
path: data/eval_find_word-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 225375
num_examples: 3200
- name: train_doc2id
num_bytes: 87993
num_examples: 1100
- name: train_id2doc
num_bytes: 91293
num_examples: 1100
- name: train_find_word
num_bytes: 46089
num_examples: 1000
- name: eval_find_word
num_bytes: 4723
num_examples: 100
download_size: 104282
dataset_size: 455473
---
# Dataset Card for "fw_num_bi_train_1000_eval_100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hle2000/Mintaka_Sequences_T5-large-ssm | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: answerEntity
dtype: string
- name: questionEntity
dtype: string
- name: groundTruthAnswerEntity
dtype: string
- name: complexityType
dtype: string
- name: graph
dtype: string
- name: correct
dtype: bool
- name: updated_sequence
dtype: string
- name: highlighted_updated_sequence
dtype: string
- name: no_highlighted_updated_sequence
dtype: string
- name: highlighted_sequence
dtype: string
- name: no_highlighted_sequence
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 184488292
num_examples: 75640
- name: test
num_bytes: 46498128
num_examples: 19134
download_size: 49836177
dataset_size: 230986420
---
# Dataset Card for "Mintaka_Sequences_T5-large-ssm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
RtwC/people | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PE
'2': I-PE
'3': B-OR
'4': I-OR
'5': B-LO
'6': I-LO
splits:
- name: train
num_bytes: 14972408
num_examples: 20865
- name: validation
num_bytes: 1676725
num_examples: 2319
- name: test
num_bytes: 3346959
num_examples: 4637
download_size: 2731946
dataset_size: 19996092
---
# Dataset Card for "people"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
davanstrien/biglamdemo | ---
license: cc-by-4.0
---
|
CodeBlackwell/avant_assist | ---
dataset_info:
features:
- name: content
dtype: string
splits:
- name: test
num_bytes: 4091597
num_examples: 3452
- name: train
num_bytes: 45222545
num_examples: 22845
download_size: 15363271
dataset_size: 49314142
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
---
|
yuan-sf63/word_label_0.5_64_P | ---
dataset_info:
features:
- name: text
dtype: string
- name: '0'
dtype: int64
- name: '1'
dtype: int64
- name: '2'
dtype: int64
- name: '3'
dtype: int64
- name: '4'
dtype: int64
- name: '5'
dtype: int64
- name: '6'
dtype: int64
- name: '7'
dtype: int64
- name: '8'
dtype: int64
- name: '9'
dtype: int64
- name: '10'
dtype: int64
- name: '11'
dtype: int64
- name: '12'
dtype: int64
- name: '13'
dtype: int64
- name: '14'
dtype: int64
- name: '15'
dtype: int64
- name: '16'
dtype: int64
- name: '17'
dtype: int64
- name: '18'
dtype: int64
- name: '19'
dtype: int64
- name: '20'
dtype: int64
- name: '21'
dtype: int64
- name: '22'
dtype: int64
- name: '23'
dtype: int64
- name: '24'
dtype: int64
- name: '25'
dtype: int64
- name: '26'
dtype: int64
- name: '27'
dtype: int64
- name: '28'
dtype: int64
- name: '29'
dtype: int64
- name: '30'
dtype: int64
- name: '31'
dtype: int64
- name: '32'
dtype: int64
- name: '33'
dtype: int64
- name: '34'
dtype: int64
- name: '35'
dtype: int64
- name: '36'
dtype: int64
- name: '37'
dtype: int64
- name: '38'
dtype: int64
- name: '39'
dtype: int64
- name: '40'
dtype: int64
- name: '41'
dtype: int64
- name: '42'
dtype: int64
- name: '43'
dtype: int64
- name: '44'
dtype: int64
- name: '45'
dtype: int64
- name: '46'
dtype: int64
- name: '47'
dtype: int64
- name: '48'
dtype: int64
- name: '49'
dtype: int64
- name: '50'
dtype: int64
- name: '51'
dtype: int64
- name: '52'
dtype: int64
- name: '53'
dtype: int64
- name: '54'
dtype: int64
- name: '55'
dtype: int64
- name: '56'
dtype: int64
- name: '57'
dtype: int64
- name: '58'
dtype: int64
- name: '59'
dtype: int64
- name: '60'
dtype: int64
- name: '61'
dtype: int64
- name: '62'
dtype: int64
- name: '63'
dtype: int64
splits:
- name: train
num_bytes: 44942972.28061179
num_examples: 70906
- name: validation
num_bytes: 4994015.719388208
num_examples: 7879
download_size: 9294335
dataset_size: 49936988.0
---
# Dataset Card for "word_label_0.5_64_P"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DynamicSuperbPrivate/NoiseSNRLevelPredictionMusic_VoxcelebMusan | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: instruction
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 7723521946.0
num_examples: 60000
- name: validation
num_bytes: 1679224822.0
num_examples: 13045
- name: test
num_bytes: 3137034391.0
num_examples: 24370
download_size: 12518945233
dataset_size: 12539781159.0
---
# Dataset Card for "NoiseSNRLevelPredictionmusic_VoxcelebMusan"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
marmofayezi/M3CelebA-All | ---
dataset_info:
features:
- name: id
dtype: string
- name: image
dtype: image
- name: mask
dtype: image
- name: landmark_image
dtype: image
- name: landmark
sequence: int32
- name: captions_eng
sequence: string
- name: captions_pes
sequence: string
- name: captions_fra
sequence: string
- name: captions_deu
sequence: string
- name: captions_ita
sequence: string
- name: captions_spa
sequence: string
- name: captions_all
sequence: string
splits:
- name: train
num_bytes: 75141631604.5
num_examples: 196476
- name: test
num_bytes: 2307124529.125
num_examples: 5997
download_size: 26602426069
dataset_size: 77448756133.625
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
jalalnb/invisible_char_on_small_persian_QA | ---
dataset_info:
features:
- name: id
dtype: int32
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: validation
num_bytes: 335122
num_examples: 130
- name: train
num_bytes: 3266777
num_examples: 1261
download_size: 1123718
dataset_size: 3601899
---
# Dataset Card for "invisible_char_on_small_persian_QA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AdapterOcean/Open_Platypus_standardized_cluster_3_alpaca | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 7197842
num_examples: 5225
download_size: 0
dataset_size: 7197842
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Open_Platypus_standardized_cluster_3_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
phamnam-misa/tts | ---
license: apache-2.0
language:
- vi
--- |
SeacowX/OpenToM | ---
task_categories:
- question-answering
- text-classification
- text-generation
language:
- en
pretty_name: OpenToM
size_categories:
- 10K<n<100K
configs:
- config_name: default
data_files:
- split: Long
path: "opentom.json"
- split: ExtraLong
path: "opentom_long.json"
---
<p align="center">
<img src="assets/figures/opentom_logo.png" width="480">
</p>
<span style="color:red;" align="center;">Please avoid testing OpenToM questions in OpenAI playground or places where the data might be used for LLM training.</span>
OpenToM is a new benchmark for assessing LLMs' Neural Theory-of-Mind (N-ToM) with the following key features:
(1) longer and clearer narrative stories
(2) characters with explicit personality traits
(3) actions that are triggered by character intentions
(4) questions designed to challenge LLMs' capabilities of modeling characters' mental states of both the physical and psychological world.
## Dataset Details
The OpenToM benchmark contains 696 narratives, 596 of which are narratives of normal length (average word count: 194.3 words) and 100 of which are long narratives (average word count: 491.6 words).
Each narrative is followed by 23 ToM questions, making a total of 16,008 questions.
The OpenToM benchmark poses first-order and second-order questions in the following genres:
1. **Location**: this is a prevalent type of question seen in many ToM benchmarks. We break location questions into *coarse* and *fine*, which differ in granularity. *Coarse* questions ask whether a character thinks that an entity is in its initial location, whereas *fine* questions ask for the precise location of an entity.
2. **Multihop**: we compose questions that demand an additional reasoning hop on top of the *Location* questions. Specifically, we inquire about characters' perception of the *fullness* and the *accessibility* of an entity. We incorporate **social commonsense** in the *accessibility* questions. For instance, if an entity is moved into someone's bag, then it becomes *less accessible* to others, since people should not access others' bags without asking for permission.
3. **Attitude**: LLMs' capability of understanding characters' perception of the psychological world has been overlooked by many established N-ToM benchmarks. We propose the *attitude* question to test LLMs' capability of understanding a character's attitude towards certain events. For instance, if my favorite rubber duck is taken away from me without asking, I would hold a *negative* attitude towards this event.
All OpenToM questions are designed as binary or ternary classification tasks. We recommend using the *macro-averaged F1 score* to evaluate LLMs' performance, as the labels are not uniformly distributed.
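Macro-averaged F1 weights each class equally regardless of its frequency, which is why it is preferred here over accuracy on skewed labels. A minimal pure-Python sketch of the metric (the label values below are illustrative, not drawn from OpenToM):

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: compute F1 per class, then take the unweighted mean."""
    f1_scores = []
    for cls in sorted(set(y_true)):
        tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
        fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
        fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        f1_scores.append(f1)
    return sum(f1_scores) / len(f1_scores)

# Illustrative ternary attitude labels (not real OpenToM data).
gold = ["positive", "negative", "neutral", "negative"]
pred = ["positive", "negative", "negative", "negative"]
print(round(macro_f1(gold, pred), 3))  # 0.6
```

For real evaluations, `sklearn.metrics.f1_score(y_true, y_pred, average="macro")` computes the same quantity.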
### Dataset Description
- **Curated by:** KclNLP
- **Funded by [optional]:** KclNLP
- **Language(s) (NLP):** English
- **License:** Creative Commons Attribution-NonCommercial 4.0 International Public License
### Dataset Generating Process
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://github.com/seacowx/OpenToM
- **Paper:** https://arxiv.org/pdf/2402.06044.pdf
## Uses
The OpenToM dataset is designed to benchmark the performance of LLMs. **It shall not be used for training or fine-tuning. Therefore, <span style="color:red">please avoid testing OpenToM questions in OpenAI playground or places where the data might be used for LLM training.</span>**
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
`opentom.json` contains the QA data with 13,708 questions derived from 596 OpenToM stories of normal length
`opentom_long.json` contains the QA data with 2,300 questions derived from 100 OpenToM long stories
To access individual question types, navigate to the **`opentom_data`** folder, which contains a `metadata.json / metadata_long.json` file with the metadata of OpenToM. The other JSON files store OpenToM questions of each genre, asked in either first-order (fo) or second-order (so) ToM.
- `location_cg_fo`: Coarse location questions asking about characters' belief of whether an entity is in its initial location (First-Order).
- `location_cg_so`: Coarse location questions asking about characters' belief of whether another character believes that an entity is in its initial location (Second-Order)
- `location_fg_fo`: Fine location questions asking about characters' belief of the precise location of an entity (First-Order).
- `location_fg_so`: Fine location questions asking about characters' belief of another character's belief of the precise location of an entity (Second-Order).
- `multihop_fo`: Multihop questions requiring additional reasoning hops on top of the location questions (First-Order).
- `multihop_so`: Multihop questions requiring additional reasoning hops on top of the location questions (Second-Order).
- `attitude`: Questions inquiring about characters' attitudes towards others' actions.
Each metadata contains the following information:
- `plot`: stores the OpenToM plot used to produce an OpenToM story.
- `plot_info`: stores the key information in the OpenToM plot, including the two protagonists, the entity-of-interest, and the two containers.
- `preferences`: stores the first-order and second-order preference belief of the characters.
- `personality`: stores the personality trait of the *mover*.
- `sentiment_statement`: stores the *mover*'s latent sentiment towards the entity-of-interest.
- `true_sentiment`: stores the *mover*'s latent sentiment towards the entity-of-interest.
- `intention`: stores the *mover*'s latent intention towards the entity-of-interest.
- `new_location`: the new location (fine-grained) of the entity.
- `observed`: documents whether the *observer* witnessed the *mover*'s action.
- `narrative`: the OpenToM narrative.
## Dataset Creation

## Acknowledgement
Part of the contents of our story generation plots are derived from the [ToMi dataset](https://github.com/facebookresearch/ToMi). We wish to thank them for generously making the ToMi dataset publicly available.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
The drafts of OpenToM stories are composed using LLMs. Although some of the stories went through human revision, we acknowledge that the texts generated by LLMs could contain biases and lack lexical diversity.
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
If you find our benchmark useful, please cite our work:
**BibTeX:**
```
@article{xu2024opentom,
title={OpenToM: A Comprehensive Benchmark for Evaluating Theory-of-Mind Reasoning Capabilities of Large Language Models},
author={Xu, Hainiu and Zhao, Runcong and Zhu, Lixing and Du, Jinhua and He, Yulan},
journal={arXiv preprint arXiv:2402.06044},
year={2024}
}
```
## Dataset Card Contact
For any question or inquiry about the OpenToM benchmark, please email [hainiu.xu@kcl.ac.uk](mailto:hainiu.xu@kcl.ac.uk)
<p align="center">
<img src="assets/figures/KCLNLP.png" width="256">
</p> |
gogogogo-1/gushen-test | ---
license: bigscience-openrail-m
language:
- ch
---
|