| datasetId | card |
|---|---|
C-MTEB/MedicalRetrieval-qrels | ---
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
dataset_info:
features:
- name: qid
dtype: string
- name: pid
dtype: string
- name: score
dtype: int64
splits:
- name: dev
num_bytes: 26893
num_examples: 1000
download_size: 12201
dataset_size: 26893
---
# Dataset Card for "MedicalRetrieval-qrels"
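The `qid`/`pid`/`score` features above form a standard qrels layout; a minimal sketch (using made-up rows, not real data) of grouping them into the `{qid: {pid: score}}` mapping most IR evaluators expect:

```python
# Hypothetical qrels rows following the qid/pid/score schema (illustrative only).
rows = [
    {"qid": "q1", "pid": "d3", "score": 1},
    {"qid": "q1", "pid": "d7", "score": 1},
    {"qid": "q2", "pid": "d3", "score": 1},
]

# Group into {qid: {pid: score}}, the shape expected by tools such as pytrec_eval.
qrels = {}
for row in rows:
    qrels.setdefault(row["qid"], {})[row["pid"]] = row["score"]

print(qrels["q1"])  # {'d3': 1, 'd7': 1}
```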
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Locutusque/prepared-automathtext | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1889390118
num_examples: 383141
download_size: 810629159
dataset_size: 1889390118
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc-by-sa-4.0
---
# Description
Taken from the awesome [math-ai/AutoMathText](https://huggingface.co/datasets/math-ai/AutoMathText), merging the following subsets for use in UltraTextbooks-2.0:
- code-jupyter-notebook-0.60-to-1.00
- code-python-0.60-to-1.00
- web-0.80-to-1.00 |
liuyanchen1015/MULTI_VALUE_mnli_who_as | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev_matched
num_bytes: 85095
num_examples: 333
- name: dev_mismatched
num_bytes: 128362
num_examples: 462
- name: test_matched
num_bytes: 87847
num_examples: 334
- name: test_mismatched
num_bytes: 133874
num_examples: 483
- name: train
num_bytes: 3698619
num_examples: 13760
download_size: 2533331
dataset_size: 4133797
---
# Dataset Card for "MULTI_VALUE_mnli_who_as"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Des1gn-1/Masculinoadulto3 | ---
license: openrail
---
|
sheik21/musica-leoo | ---
license: openrail
---
|
kblw/treemap_sat_ft | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': train
'1': val
splits:
- name: train
num_bytes: 14946720.874
num_examples: 2533
- name: validation
num_bytes: 64576.0
num_examples: 54
download_size: 12885170
dataset_size: 15011296.874
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
bboytips/newdataset | ---
license: other
---
|
mboth/waermeVersorgen-100-undersampled | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: Datatype
dtype: string
- name: Beschreibung
dtype: string
- name: Name
dtype: string
- name: Unit
dtype: string
- name: text
dtype: string
- name: Grundfunktion
dtype: string
- name: label
dtype:
class_label:
names:
'0': Beziehen
'1': Erzeugen
'2': Speichern
'3': Verteilen
splits:
- name: train
num_bytes: 78794.01633914771
num_examples: 400
- name: test
num_bytes: 447086
num_examples: 2265
- name: valid
num_bytes: 447086
num_examples: 2265
download_size: 355050
dataset_size: 972966.0163391477
---
# Dataset Card for "waermeVersorgen-100-undersampled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cluneau/github-issues | ---
annotations_creators: []
language:
- en
language_creators: []
license: []
multilinguality:
- monolingual
pretty_name: HF Datasets GitHub Issues
size_categories:
- 1K<n<10K
source_datasets: []
tags: []
task_categories:
- text-classification
task_ids:
- multi-label-classification
dataset_info:
features:
- name: url
dtype: string
- name: repository_url
dtype: string
- name: labels_url
dtype: string
- name: comments_url
dtype: string
- name: events_url
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: user
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: labels
list:
- name: id
dtype: int64
- name: node_id
dtype: string
- name: url
dtype: string
- name: name
dtype: string
- name: color
dtype: string
- name: default
dtype: bool
- name: description
dtype: string
- name: state
dtype: string
- name: locked
dtype: bool
- name: assignee
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: assignees
list:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: comments
sequence: string
- name: created_at
dtype: int64
- name: updated_at
dtype: int64
- name: closed_at
dtype: int64
- name: author_association
dtype: string
- name: draft
dtype: float64
- name: pull_request
struct:
- name: url
dtype: string
- name: html_url
dtype: string
- name: diff_url
dtype: string
- name: patch_url
dtype: string
- name: merged_at
dtype: timestamp[s]
- name: body
dtype: string
- name: reactions
struct:
- name: url
dtype: string
- name: total_count
dtype: int64
- name: '+1'
dtype: int64
- name: '-1'
dtype: int64
- name: laugh
dtype: int64
- name: hooray
dtype: int64
- name: confused
dtype: int64
- name: heart
dtype: int64
- name: rocket
dtype: int64
- name: eyes
dtype: int64
- name: timeline_url
dtype: string
- name: state_reason
dtype: string
- name: is_pull_request
dtype: bool
splits:
- name: train
num_bytes: 12013382
num_examples: 2242
download_size: 3940692
dataset_size: 12013382
---
# Dataset Card for "github-issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Leyo/ActivityNet_Captions | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: ActivityNet Captions
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- video-captioning
task_ids:
- closed-domain-qa
---
# Dataset Card for ActivityNet Captions
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://cs.stanford.edu/people/ranjaykrishna/densevid/
- **Paper:** https://arxiv.org/abs/1705.00754
### Dataset Summary
The ActivityNet Captions dataset connects videos to a series of temporally annotated sentence descriptions. Each sentence covers a unique segment of the video, describing multiple events that occur. These events may occur over very long or short periods of time and are not limited in any capacity, allowing them to co-occur. On average, each of the 20k videos contains 3.65 temporally localized sentences, resulting in a total of 100k sentences. The number of sentences per video follows a relatively normal distribution, and as the video duration increases, the number of sentences also increases. Each sentence has an average length of 13.48 words, which is also normally distributed. You can find more details under the ActivityNet Captions Dataset section and in the supplementary materials of the paper.
### Languages
The captions in the dataset are in English.
## Dataset Structure
### Data Fields
- `video_id` : `str` unique identifier for the video
- `video_path`: `str` Path to the video file
- `duration`: `float32` Duration of the video
- `captions_starts`: `List_float32` List of timestamps denoting the time at which each caption starts
- `captions_ends`: `List_float32` List of timestamps denoting the time at which each caption ends
- `en_captions`: `list_str` List of English captions describing parts of the video
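The three caption fields above are parallel lists; a hedged sketch (on an invented record, not real data) of pairing each caption with its temporal segment:

```python
# Invented example record following the fields above (illustrative only).
example = {
    "video_id": "v_abc123",
    "duration": 82.5,
    "captions_starts": [0.0, 20.2, 55.0],
    "captions_ends": [19.8, 54.1, 82.5],
    "en_captions": ["A man walks in.", "He sits down.", "He waves."],
}

# captions_starts, captions_ends and en_captions align index-wise,
# so zip() pairs each caption with its start/end timestamps.
for start, end, caption in zip(example["captions_starts"],
                               example["captions_ends"],
                               example["en_captions"]):
    print(f"{start:5.1f}s - {end:5.1f}s  {caption}")
```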
### Data Splits
| |train |validation| test | Overall |
|-------------|------:|---------:|------:|------:|
|# of videos|10,009 |4,917 |4,885 |19,811 |
### Annotations
Quoting [ActivityNet Captions' paper](https://arxiv.org/abs/1705.00754): \
"Each annotation task was divided into two steps: (1)
Writing a paragraph describing all major events happening
in the videos in a paragraph, with each sentence of the paragraph describing one event, and (2) Labeling the
start and end time in the video in which each sentence in the
paragraph event occurred."
### Who annotated the dataset?
Amazon Mechanical Turk annotators.
### Personal and Sensitive Information
Nothing specifically mentioned in the paper.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Licensing Information
[More Information Needed]
### Citation Information
```bibtex
@inproceedings{krishna2017dense,
title={Dense-Captioning Events in Videos},
author={Krishna, Ranjay and Hata, Kenji and Ren, Frederic and Fei-Fei, Li and Niebles, Juan Carlos},
booktitle={International Conference on Computer Vision (ICCV)},
year={2017}
}
```
### Contributions
Thanks to [@leot13](https://github.com/leot13) for adding this dataset. |
HuggingFaceH4/capybara | ---
dataset_info:
features:
- name: source
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: num_turns
dtype: int64
splits:
- name: train_sft
num_bytes: 71928160.5765338
num_examples: 15806
- name: test_sft
num_bytes: 910137.4234662001
num_examples: 200
download_size: 37692145
dataset_size: 72838298.0
configs:
- config_name: default
data_files:
- split: train_sft
path: data/train_sft-*
- split: test_sft
path: data/test_sft-*
---
# Dataset Card for Capybara
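Per the `messages` feature above, each conversation is a list of `{content, role}` entries; a minimal sketch (on an invented record, not real data) of consuming that OpenAI-SDK-style format:

```python
# Invented record following the source/messages/num_turns schema (illustrative only).
example = {
    "source": "capybara",
    "messages": [
        {"role": "user", "content": "What is 2 + 2?"},
        {"role": "assistant", "content": "2 + 2 = 4."},
    ],
    "num_turns": 2,
}

# Because every entry already carries role/content keys, the list can be passed
# unchanged to chat-completion-style APIs or tokenizer chat templates.
user_turns = [m["content"] for m in example["messages"] if m["role"] == "user"]
print(user_turns)  # ['What is 2 + 2?']
```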
This is a formatted version of [`LDJnr/Capybara`](https://huggingface.co/datasets/LDJnr/Capybara) that stores the conversations in the same message format as the OpenAI SDK. |
Nicolas-BZRD/KALI_opendata | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 768806851
num_examples: 430667
download_size: 298891657
dataset_size: 768806851
license: odc-by
language:
- fr
tags:
- legal
pretty_name: Conventions collectives nationales
size_categories:
- 100K<n<1M
---
# KALI (Conventions collectives nationales)
[All collective agreements and related texts](https://echanges.dila.gouv.fr/OPENDATA/KALI/). The database also provides access to certain national collective agreements that have not been extended, as well as regional and departmental collective agreements, whether or not they have been extended. The associated texts include agreements relating to a collective agreement, salaries and extension decrees.
The data is updated from the Bulletin officiel "Conventions collectives" published under the responsibility of the Ministry of Labour, Solidarity and the Civil Service and distributed by the DILA. |
vwxyzjn/openhermes-dev__mistralai_Mistral-7B-Instruct-v0.1__1706885528 | ---
dataset_info:
features:
- name: source
dtype: string
- name: skip_prompt_formatting
dtype: bool
- name: title
dtype: 'null'
- name: custom_instruction
dtype: 'null'
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: weight
dtype: 'null'
- name: system_prompt
dtype: 'null'
- name: idx
dtype: 'null'
- name: id
dtype: 'null'
- name: model
dtype: 'null'
- name: topic
dtype: 'null'
- name: avatarUrl
dtype: 'null'
- name: model_name
dtype: 'null'
- name: language
dtype: 'null'
- name: views
dtype: 'null'
- name: hash
dtype: 'null'
- name: category
dtype: string
- name: prompt
dtype: string
- name: chosen_policy
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected_policy
dtype: string
splits:
- name: train_prefs
num_bytes: 129848
num_examples: 23
- name: test_prefs
num_bytes: 1795
num_examples: 1
download_size: 130692
dataset_size: 131643
configs:
- config_name: default
data_files:
- split: train_prefs
path: data/train_prefs-*
- split: test_prefs
path: data/test_prefs-*
---
|
Nadav/pixel_glue_cola_noisy_ocr | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 2874918
num_examples: 42755
- name: validation
num_bytes: 70427
num_examples: 1043
download_size: 1642630
dataset_size: 2945345
---
# Dataset Card for "pixel_glue_cola_noisy_ocr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
burtenshaw/DIBT_prompts_ranked_synthetic_mistral_large | ---
dataset_info:
features:
- name: input
dtype: string
- name: quality
list:
- name: status
dtype: string
- name: user_id
dtype: string
- name: value
dtype: string
- name: metadata
dtype: string
- name: avg_rating
dtype: float64
- name: num_responses
dtype: int64
- name: agreement_ratio
dtype: float64
- name: raw_responses
sequence: int64
- name: kind
dtype: string
- name: cluster_description
dtype: string
- name: topic
dtype: string
- name: generation_model
sequence: string
- name: generation_prompt
list:
list:
- name: content
dtype: string
- name: role
dtype: string
- name: raw_generation_responses
sequence: string
- name: rating
sequence: float64
- name: rationale
sequence: string
- name: generations
dtype: 'null'
splits:
- name: train
num_bytes: 21712081.42857143
num_examples: 10000
download_size: 8521094
dataset_size: 21712081.42857143
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
carnival13/rbrt_test_val_lrg2 | ---
dataset_info:
features:
- name: label
dtype: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 148079605
num_examples: 104550
download_size: 32715970
dataset_size: 148079605
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "rbrt_test_val_lrg2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tessiw/german_OpenOrca15 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 420403902
num_examples: 250000
download_size: 241528835
dataset_size: 420403902
---
# Dataset Card for "german_OpenOrca15"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/shu_arknights | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of shu/黍 (Arknights)
This is the dataset of shu/黍 (Arknights), containing 416 images and their tags.
The core tags of this character are `horns, multicolored_hair, dragon_horns, long_hair, dragon_girl, pointy_ears, grey_hair, blue_hair, streaked_hair, earrings, blonde_hair, tassel_earrings, tassel, hair_between_eyes, colored_skin, very_long_hair, hair_intakes, tail, blue_eyes, hair_ornament, dragon_tail, grey_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 416 | 1.00 GiB | [Download](https://huggingface.co/datasets/CyberHarem/shu_arknights/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200             | 416      | 820.14 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shu_arknights/resolve/main/dataset-1200.zip)             | IMG+TXT    | Dataset with the shorter side not exceeding 1200 pixels.              |
| stage3-p480-1200 | 1074 | 1.52 GiB | [Download](https://huggingface.co/datasets/CyberHarem/shu_arknights/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/shu_arknights',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 7 |  |  |  |  |  | 1girl, bare_shoulders, cowboy_shot, long_sleeves, looking_at_viewer, necklace, off_shoulder, open_jacket, simple_background, solo, strapless_shirt, white_background, white_jacket, white_pants, white_shirt, closed_mouth, smile, collarbone, holding, hair_stick, branch, red_pupils |
| 1 | 7 |  |  |  |  |  | 1girl, bare_shoulders, closed_mouth, hair_stick, off_shoulder, open_jacket, simple_background, solo, strapless_shirt, white_background, white_jacket, white_shirt, looking_at_viewer, necklace, smile, long_sleeves, upper_body |
| 2 | 5 |  |  |  |  |  | 1girl, bare_shoulders, closed_mouth, collarbone, looking_at_viewer, necklace, off_shoulder, open_jacket, simple_background, smile, solo, upper_body, white_background, white_jacket, white_shirt, strapless_shirt |
| 3 | 5 |  |  |  |  |  | 1girl, bare_shoulders, bead_bracelet, closed_mouth, hair_stick, long_sleeves, looking_at_viewer, necklace, off_shoulder, open_jacket, smile, solo, strapless_shirt, white_jacket, white_pants, white_shirt, cowboy_shot, holding_sword, simple_background, white_background, red_pupils, small_breasts, colored_tips |
| 4 | 5 |  |  |  |  |  | 1girl, bare_shoulders, day, long_sleeves, off_shoulder, open_jacket, outdoors, smile, solo, strapless_shirt, white_jacket, white_pants, white_shirt, blue_sky, closed_mouth, cowboy_shot, necklace, green_eyes, hair_stick, looking_at_viewer, small_breasts, hand_up, mountainous_horizon |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bare_shoulders | cowboy_shot | long_sleeves | looking_at_viewer | necklace | off_shoulder | open_jacket | simple_background | solo | strapless_shirt | white_background | white_jacket | white_pants | white_shirt | closed_mouth | smile | collarbone | holding | hair_stick | branch | red_pupils | upper_body | bead_bracelet | holding_sword | small_breasts | colored_tips | day | outdoors | blue_sky | green_eyes | hand_up | mountainous_horizon |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------------|:--------------|:---------------|:--------------------|:-----------|:---------------|:--------------|:--------------------|:-------|:------------------|:-------------------|:---------------|:--------------|:--------------|:---------------|:--------|:-------------|:----------|:-------------|:---------|:-------------|:-------------|:----------------|:----------------|:----------------|:---------------|:------|:-----------|:-----------|:-------------|:----------|:----------------------|
| 0 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | X | X | | X | X | X | X | X | X | X | X | X | X | | X | X | X | | | X | | | X | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | X | | | X | X | X | X | X | X | X | X | X | | X | X | X | X | | | | | X | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | X | | X | | X | X | X | X | | | | | | |
| 4 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | | X | X | | X | X | X | X | X | | | X | | | | | | X | | X | X | X | X | X | X |
|
NavaneethNivol/resume_generation_dataset | ---
dataset_info:
features:
- name: category
dtype: string
- name: job_key
dtype: string
- name: job_description
dtype: string
- name: jd_keywords
dtype: string
- name: original_bullet_points
dtype: string
- name: target_bullet_points
dtype: string
splits:
- name: train
num_bytes: 37588539.23105094
num_examples: 6307
- name: test
num_bytes: 16115333.768949062
num_examples: 2704
download_size: 7016381
dataset_size: 53703873.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Brendan/babylm-processed | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: special_tokens_mask
sequence: int8
splits:
- name: train
num_bytes: 6405468
num_examples: 2077
- name: valid
num_bytes: 1612932
num_examples: 523
download_size: 2868688
dataset_size: 8018400
---
# Dataset Card for "babylm-processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/2c8fb846 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1319
dataset_size: 182
---
# Dataset Card for "2c8fb846"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
shidowake/glaive-code-assistant-v1-sharegpt-format_split_15 | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 10503837.603832223
num_examples: 6805
download_size: 5133492
dataset_size: 10503837.603832223
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
xusenlin/tnews | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 4421883
num_examples: 53360
- name: validation
num_bytes: 830536
num_examples: 10000
download_size: 3695633
dataset_size: 5252419
---
# Dataset Card for "tnews"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
multi-train/npr_1107 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: task
dtype: string
- name: instruction
struct:
- name: query
dtype: string
- name: pos
dtype: string
- name: neg
dtype: string
splits:
- name: train
num_bytes: 474583082
num_examples: 200000
download_size: 259084905
dataset_size: 474583082
---
# Dataset Card for "npr_1107"
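The `query`/`pos`/`neg` features above are the usual shape for contrastive retrieval training; a hedged sketch (with made-up rows, not real data) of expanding them into (query, positive, negative) triplets:

```python
# Invented rows following the query/pos/neg schema (illustrative only).
rows = [
    {"query": "budget deal reached",
     "pos": ["Congress passes a budget agreement."],
     "neg": ["Local team wins the game.", "New phone released this week."]},
]

# Expand each row into one triplet per (positive, negative) passage pair,
# the input shape used by many contrastive/triplet-loss trainers.
triplets = [
    (row["query"], pos, neg)
    for row in rows
    for pos in row["pos"]
    for neg in row["neg"]
]
print(len(triplets))  # 2
```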
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nyanko7/coco-hosted | ---
license: openrail
---
Usage:
```python
from datasets import load_dataset
coco_dataset = load_dataset("nyanko7/coco-hosted")
```
Each instance has the following structure:
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile>,
'filepath': 'COCO_val2014_000000522418.jpg',
'sentids': [681330, 686718, 688839, 693159, 693204],
'filename': 'COCO_val2014_000000522418.jpg',
'imgid': 1,
'split': 'restval',
'sentences': {
'tokens': ['a', 'woman', 'wearing', 'a', 'net', 'on', 'her', 'head', 'cutting', 'a', 'cake'],
'raw': 'A woman wearing a net on her head cutting a cake. ',
'imgid': 1,
'sentid': 681330
},
'cocoid': 522418
}
``` |
result-kand2-sdxl-wuerst-karlo/7a9ac406 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 162
num_examples: 10
download_size: 1319
dataset_size: 162
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "7a9ac406"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DevAibest/alpaca-geotherm-data | ---
dataset_info:
features:
- name: input
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 1899277198
num_examples: 579050
- name: test
num_bytes: 104795579
num_examples: 32169
- name: valid
num_bytes: 104865897
num_examples: 32170
download_size: 1071239875
dataset_size: 2108938674
---
# Dataset Card for "alpaca-geotherm-data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
speed1/rockgerio | ---
license: openrail
---
|
distilled-one-sec-cv12-each-chunk-uniq/chunk_94 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1244956484.0
num_examples: 242587
download_size: 1275931622
dataset_size: 1244956484.0
---
# Dataset Card for "chunk_94"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_togethercomputer__RedPajama-INCITE-7B-Instruct | ---
pretty_name: Evaluation run of togethercomputer/RedPajama-INCITE-7B-Instruct
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [togethercomputer/RedPajama-INCITE-7B-Instruct](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Instruct)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_togethercomputer__RedPajama-INCITE-7B-Instruct\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-19T05:42:36.863532](https://huggingface.co/datasets/open-llm-leaderboard/details_togethercomputer__RedPajama-INCITE-7B-Instruct/blob/main/results_2023-10-19T05-42-36.863532.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.001153523489932886,\n\
\ \"em_stderr\": 0.0003476179896857104,\n \"f1\": 0.04208578020134259,\n\
\ \"f1_stderr\": 0.00114625984545935,\n \"acc\": 0.3327435280488615,\n\
\ \"acc_stderr\": 0.008428433474529594\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.001153523489932886,\n \"em_stderr\": 0.0003476179896857104,\n\
\ \"f1\": 0.04208578020134259,\n \"f1_stderr\": 0.00114625984545935\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.01592115238817286,\n \
\ \"acc_stderr\": 0.0034478192723889985\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6495659037095501,\n \"acc_stderr\": 0.013409047676670187\n\
\ }\n}\n```"
repo_url: https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Instruct
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|arc:challenge|25_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_19T05_42_36.863532
path:
- '**/details_harness|drop|3_2023-10-19T05-42-36.863532.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-19T05-42-36.863532.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_19T05_42_36.863532
path:
- '**/details_harness|gsm8k|5_2023-10-19T05-42-36.863532.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-19T05-42-36.863532.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hellaswag|10_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T16:41:06.835084.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T16:41:06.835084.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T16:41:06.835084.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_19T05_42_36.863532
path:
- '**/details_harness|winogrande|5_2023-10-19T05-42-36.863532.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-19T05-42-36.863532.parquet'
- config_name: results
data_files:
- split: 2023_07_19T16_41_06.835084
path:
- results_2023-07-19T16:41:06.835084.parquet
- split: 2023_10_19T05_42_36.863532
path:
- results_2023-10-19T05-42-36.863532.parquet
- split: latest
path:
- results_2023-10-19T05-42-36.863532.parquet
---
# Dataset Card for Evaluation run of togethercomputer/RedPajama-INCITE-7B-Instruct
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Instruct
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [togethercomputer/RedPajama-INCITE-7B-Instruct](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Instruct) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_togethercomputer__RedPajama-INCITE-7B-Instruct",
"harness_winogrande_5",
	split="latest")
```
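Split names other than `latest` are raw run timestamps, so you can also resolve the newest run yourself. A minimal sketch (pure Python, no download; it assumes only the split-naming scheme visible in the configs above):

```python
# Zero-padded timestamp split names sort lexicographically in chronological
# order, so the newest run is simply the maximum of the non-"latest" names.
def newest_run_split(split_names):
    timestamped = [s for s in split_names if s != "latest"]
    return max(timestamped)

splits = ["2023_07_19T16_41_06.835084", "2023_10_19T05_42_36.863532", "latest"]
print(newest_run_split(splits))  # → 2023_10_19T05_42_36.863532
```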
## Latest results
These are the [latest results from run 2023-10-19T05:42:36.863532](https://huggingface.co/datasets/open-llm-leaderboard/details_togethercomputer__RedPajama-INCITE-7B-Instruct/blob/main/results_2023-10-19T05-42-36.863532.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.001153523489932886,
"em_stderr": 0.0003476179896857104,
"f1": 0.04208578020134259,
"f1_stderr": 0.00114625984545935,
"acc": 0.3327435280488615,
"acc_stderr": 0.008428433474529594
},
"harness|drop|3": {
"em": 0.001153523489932886,
"em_stderr": 0.0003476179896857104,
"f1": 0.04208578020134259,
"f1_stderr": 0.00114625984545935
},
"harness|gsm8k|5": {
"acc": 0.01592115238817286,
"acc_stderr": 0.0034478192723889985
},
"harness|winogrande|5": {
"acc": 0.6495659037095501,
"acc_stderr": 0.013409047676670187
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
alfredplpl/wikipedia-qa-ja-100k | ---
language:
- ja
license: cc-by-sa-3.0
size_categories:
- 100K<n<1M
task_categories:
- question-answering
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 29293461
num_examples: 106876
download_size: 0
dataset_size: 29293461
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "wikipedia-qa-ja-100k"
# Original Dataset
- hpprc/wikipedia-20240101
# Procedure
- Extract each article's title and its first line from the dataset.
- Generate the answer by summarizing that line with an LLM:
  - Feed a RAG-like prompt to CALM 2 7B Chat.
  - Format the response.
# RAG-like Prompt
```python
f"""USER: {title}とはなんですか?次の文章を参考に一言でまとめてください。{text}
ASSISTANT: """
``` |
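For illustration, the template above can be filled in per article like this (the title and text are made-up placeholders, and the exact response-formatting step used by the author is not specified):

```python
def build_prompt(title: str, text: str) -> str:
    # Mirrors the template above; title/text here are placeholder values.
    return (
        f"USER: {title}とはなんですか?次の文章を参考に一言でまとめてください。{text}\n"
        "ASSISTANT: "
    )

print(build_prompt("富士山", "富士山は日本一高い山である。"))
```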
Wikimedians/wikidata-all | ---
license: cc0-1.0
task_categories:
- graph-ml
pretty_name: Wikidata - All Entities
size_categories:
- 100M<n<1B
---
# Wikidata - All Entities
This Hugging Face Data Set contains the entirety of [Wikidata](https://wikidata.org/) as of the date listed below. Wikidata is a freely licensed structured knowledge graph following the wiki model of user contributions. If you build on this data please consider [contributing back to Wikidata](https://www.wikidata.org/wiki/Special:MyLanguage/Wikidata:Contribute).
For more on the size and other statistics of Wikidata, see: [Special:Statistics](https://www.wikidata.org/wiki/Special:Statistics).
***Current Dump as of:*** 2024-03-04
## Original Source
The data contained in this repository is retrieved from [dumps.wikimedia.org](https://dumps.wikimedia.org/wikidatawiki/entities/) with documentation available in [Wikidata:Database download](https://www.wikidata.org/wiki/Wikidata:Database_download).
TODO: Convert Wikidata's JSON to Parquet.
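As a practical note, the `latest-all.json` dump is one large JSON array with one entity per line (per the Wikidata database-download documentation), so it can be parsed line by line without loading the whole file. A minimal streaming sketch, illustrated on a tiny inline stand-in rather than the real multi-hundred-gigabyte file:

```python
import json

def iter_entities(lines):
    """Yield one entity dict per line of a Wikidata JSON dump,
    skipping the enclosing '[' / ']' lines and trailing commas."""
    for line in lines:
        line = line.strip().rstrip(",")
        if line in ("[", "]", ""):
            continue
        yield json.loads(line)

# Tiny inline stand-in with the same line-oriented layout as the dump.
sample = [
    "[",
    '{"id": "Q42", "type": "item", "labels": {"en": {"language": "en", "value": "Douglas Adams"}}},',
    '{"id": "Q64", "type": "item", "labels": {"en": {"language": "en", "value": "Berlin"}}}',
    "]",
]
for entity in iter_entities(sample):
    print(entity["id"], entity["labels"]["en"]["value"])
```

On the real dump, `sample` would be replaced with a line iterator over the compressed file, e.g. `bz2.open(path, "rt")`.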
## License
Wikidata is licensed [CC0 1.0 Universal](https://creativecommons.org/publicdomain/zero/1.0/). |
anton-l/superb | ---
annotations_creators:
- other
language_creators:
- other
language:
- en
license:
- unknown
multilinguality:
- monolingual
pretty_name: SUPERB
size_categories:
- unknown
source_datasets:
- original
- extended|librispeech_asr
- extended|other-librimix
- extended|other-speech_commands
task_categories:
- speech-processing
task_ids:
- automatic-speech-recognition
- phoneme-recognition
- keyword-spotting
- query-by-example-spoken-term-detection
- speaker-identification
- automatic-speaker-verification
- speaker-diarization
- intent-classification
- slot-filling
- emotion-recognition
---
# Dataset Card for SUPERB
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://superbbenchmark.org](http://superbbenchmark.org)
- **Repository:** [https://github.com/s3prl/s3prl](https://github.com/s3prl/s3prl)
- **Paper:** [SUPERB: Speech processing Universal PERformance Benchmark](https://arxiv.org/abs/2105.01051)
- **Leaderboard:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [Lewis Tunstall](mailto:lewis@huggingface.co) and [Albert Villanova](mailto:albert@huggingface.co)
### Dataset Summary
SUPERB is a leaderboard to benchmark the performance of a shared model across a wide range of speech processing tasks with minimal architecture changes and labeled data.
### Supported Tasks and Leaderboards
The SUPERB leaderboard can be found here https://superbbenchmark.org/leaderboard and consists of the following tasks:
#### pr
Phoneme Recognition (PR) transcribes an utterance into the smallest content units. This task includes alignment modeling to avoid potentially inaccurate forced alignment. [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) train-clean-100/dev-clean/test-clean subsets are adopted in SUPERB for training/validation/testing. Phoneme transcriptions are obtained from the LibriSpeech official g2p-model-5 and the conversion script in Kaldi librispeech s5 recipe. The evaluation metric is phone error rate (PER).
#### asr
Automatic Speech Recognition (ASR) transcribes utterances into words. While PR analyzes the improvement in modeling phonetics, ASR reflects the significance of the improvement in a real-world scenario. [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) train-clean-100/dev-clean/test-clean subsets are used for training/validation/testing. The evaluation metric is word error rate (WER).
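WER is the word-level Levenshtein (edit) distance between hypothesis and reference, normalized by the reference length; PER is the same computation over phones. A minimal illustration (not the official SUPERB scoring code):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Levenshtein distance over words, normalized by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[-1][-1] / len(ref)

print(word_error_rate("the cat sat", "the cat sat"))       # 0.0
print(word_error_rate("the cat sat", "the bat sat down"))  # 2 edits / 3 words
```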
#### ks
Keyword Spotting (KS) detects preregistered keywords by classifying utterances into a predefined set of words. The task is usually performed on-device for a fast response time. Thus, accuracy, model size, and inference time are all crucial. SUPERB uses the widely used [Speech Commands dataset v1.0](https://www.tensorflow.org/datasets/catalog/speech_commands) for the task. The dataset consists of ten classes of keywords, a class for silence, and an unknown class to include false positives. The evaluation metric is accuracy (ACC).
##### Example of usage:
Use these auxiliary functions to:
- load the audio file into an audio data array
- sample from long `_silence_` audio clips
For other examples of handling long `_silence_` clips, see the [S3PRL](https://github.com/s3prl/s3prl/blob/099ce807a6ffa6bf2482ceecfcaf83dea23da355/s3prl/downstream/speech_commands/dataset.py#L80)
or [TFDS](https://github.com/tensorflow/datasets/blob/6b8cfdb7c3c0a04e731caaa8660ce948d0a67b1e/tensorflow_datasets/audio/speech_commands.py#L143) implementations.
```python
def map_to_array(example):
import soundfile as sf
speech_array, sample_rate = sf.read(example["file"])
example["speech"] = speech_array
example["sample_rate"] = sample_rate
return example
def sample_noise(example):
# Use this function to extract random 1 sec slices of each _silence_ utterance,
# e.g. inside `torch.utils.data.Dataset.__getitem__()`
from random import randint
if example["label"] == "_silence_":
random_offset = randint(0, len(example["speech"]) - example["sample_rate"] - 1)
example["speech"] = example["speech"][random_offset : random_offset + example["sample_rate"]]
return example
```
#### qbe
Query by Example Spoken Term Detection (QbE) detects a spoken term (query) in an audio database (documents) by binary discriminating a given pair of query and document into a match or not. The English subset in [QUESST 2014 challenge](https://github.com/s3prl/s3prl/tree/master/downstream#qbe-query-by-example-spoken-term-detection) is adopted since we focus on investigating English as the first step. The evaluation metric is maximum term weighted value (MTWV) which balances misses and false alarms.
#### ic
Intent Classification (IC) classifies utterances into predefined classes to determine the intent of speakers. SUPERB uses the [Fluent Speech Commands dataset](https://github.com/s3prl/s3prl/tree/master/downstream#ic-intent-classification---fluent-speech-commands), where each utterance is tagged with three intent labels: action, object, and location. The evaluation metric is accuracy (ACC).
#### sf
Slot Filling (SF) predicts a sequence of semantic slot-types from an utterance, like a slot-type FromLocation for a spoken word Taipei, which is known as a slot-value. Both slot-types and slot-values are essential for an SLU system to function. The evaluation metrics thus include slot-type F1 score and slot-value CER. [Audio SNIPS](https://github.com/s3prl/s3prl/tree/master/downstream#sf-end-to-end-slot-filling) is adopted, which synthesized multi-speaker utterances for SNIPS. Following the standard split in SNIPS, US-accent speakers are further selected for training, and others are for validation/testing.
#### si
Speaker Identification (SI) classifies each utterance for its speaker identity as a multi-class classification, where speakers are in the same predefined set for both training and testing. The widely used [VoxCeleb1 dataset](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html) is adopted, and the evaluation metric is accuracy (ACC).
#### asv
Automatic Speaker Verification (ASV) verifies whether the speakers of a pair of utterances match as a binary classification, and speakers in the testing set may not appear in the training set. Thus, ASV is more challenging than SI. VoxCeleb1 is used without VoxCeleb2 training data and noise augmentation. The evaluation metric is equal error rate (EER).
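The EER is the operating point where the false-accept rate equals the false-reject rate. A rough threshold-sweep sketch on made-up score lists (not the official scoring code, which interpolates the DET curve):

```python
def equal_error_rate(genuine_scores, impostor_scores):
    """Sweep a threshold over all observed scores and return the point
    where false-accept and false-reject rates are closest."""
    best_gap, best_eer = None, None
    for thr in sorted(genuine_scores + impostor_scores):
        frr = sum(s < thr for s in genuine_scores) / len(genuine_scores)
        far = sum(s >= thr for s in impostor_scores) / len(impostor_scores)
        gap = abs(far - frr)
        if best_gap is None or gap < best_gap:
            best_gap, best_eer = gap, (far + frr) / 2
    return best_eer

# Perfectly separable scores -> EER of 0.
print(equal_error_rate([0.9, 0.8, 0.7, 0.6], [0.5, 0.4, 0.3, 0.2]))  # 0.0
# One genuine and one impostor score on the wrong side of every threshold.
print(equal_error_rate([0.9, 0.7, 0.4], [0.6, 0.3, 0.1]))
```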
#### sd
Speaker Diarization (SD) predicts *who is speaking when* for each timestamp, and multiple speakers can speak simultaneously. The model has to encode rich speaker characteristics for each frame and should be able to represent mixtures of signals. [LibriMix](https://github.com/s3prl/s3prl/tree/master/downstream#sd-speaker-diarization) is adopted, where LibriSpeech train-clean-100/dev-clean/test-clean are used to generate mixtures for training/validation/testing. We focus on the two-speaker scenario as the first step. The time-coded speaker labels were generated using alignments from the Kaldi LibriSpeech ASR model. The evaluation metric is diarization error rate (DER).
##### Example of usage
Use these auxiliary functions to:
- load the audio file into an audio data array
- generate the label array
```python
def load_audio_file(example, frame_shift=160):
import soundfile as sf
example["array"], example["sample_rate"] = sf.read(
example["file"], start=example["start"] * frame_shift, stop=example["end"] * frame_shift
)
return example
def generate_label(example, frame_shift=160, num_speakers=2, rate=16000):
import numpy as np
start = example["start"]
end = example["end"]
frame_num = end - start
speakers = sorted({speaker["speaker_id"] for speaker in example["speakers"]})
label = np.zeros((frame_num, num_speakers), dtype=np.int32)
for speaker in example["speakers"]:
speaker_index = speakers.index(speaker["speaker_id"])
start_frame = np.rint(speaker["start"] * rate / frame_shift).astype(int)
end_frame = np.rint(speaker["end"] * rate / frame_shift).astype(int)
rel_start = rel_end = None
if start <= start_frame < end:
rel_start = start_frame - start
if start < end_frame <= end:
rel_end = end_frame - start
if rel_start is not None or rel_end is not None:
label[rel_start:rel_end, speaker_index] = 1
example["label"] = label
return example
```
#### er
Emotion Recognition (ER) predicts an emotion class for each utterance. The most widely used ER dataset [IEMOCAP](https://github.com/s3prl/s3prl/tree/master/downstream#er-emotion-recognition) is adopted, and we follow the conventional evaluation protocol: we drop the unbalanced emotion classes to leave the final four classes with a similar amount of data points and cross-validate on five folds of the standard splits. The evaluation metric is accuracy (ACC).
### Languages
The language data in SUPERB is in English (BCP-47 `en`).
## Dataset Structure
### Data Instances
#### pr
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### asr
An example from each split looks like:
```python
{'chapter_id': 1240,
'file': 'path/to/file.flac',
'audio': {'path': 'path/to/file.flac',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'id': '103-1240-0000',
'speaker_id': 103,
'text': 'CHAPTER ONE MISSUS RACHEL LYNDE IS SURPRISED MISSUS RACHEL LYNDE '
'LIVED JUST WHERE THE AVONLEA MAIN ROAD DIPPED DOWN INTO A LITTLE '
'HOLLOW FRINGED WITH ALDERS AND LADIES EARDROPS AND TRAVERSED BY A '
'BROOK'}
```
#### ks
An example from each split looks like:
```python
{
'file': '/path/yes/af7a8296_nohash_1.wav',
'audio': {'path': '/path/yes/af7a8296_nohash_1.wav',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'label': 0 # 'yes'
}
```
#### qbe
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### ic
```python
{
'file': "/path/wavs/speakers/2BqVo8kVB2Skwgyb/063aa8f0-4479-11e9-a9a5-5dbec3b8816a.wav",
'audio': {'path': '/path/wavs/speakers/2BqVo8kVB2Skwgyb/063aa8f0-4479-11e9-a9a5-5dbec3b8816a.wav',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'speaker_id': '2BqVo8kVB2Skwgyb',
'text': 'Turn the bedroom lights off',
'action': 3, # 'deactivate'
'object': 7, # 'lights'
'location': 0 # 'bedroom'
}
```
#### sf
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### si
```python
{
'file': '/path/wav/id10003/na8-QEFmj44/00003.wav',
'audio': {'path': '/path/wav/id10003/na8-QEFmj44/00003.wav',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'label': 2 # 'id10003'
}
```
#### asv
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sd
An example from each split looks like:
```python
{
'record_id': '1578-6379-0038_6415-111615-0009',
'file': 'path/to/file.wav',
'audio': {'path': 'path/to/file.wav',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'start': 0,
'end': 1590,
'speakers': [
{'speaker_id': '1578', 'start': 28, 'end': 657},
{'speaker_id': '6415', 'start': 28, 'end': 1576}
]
}
```
#### er
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
#### Note about the `audio` fields
When accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
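To see why the access order matters, here is a toy stand-in for lazy decoding. This illustrates the principle only; it is not the actual `datasets` implementation:

```python
class LazyColumn:
    """Toy stand-in for a column that decodes audio on access."""

    def __init__(self, files):
        self.files = files
        self.decodes = 0  # how many files have been "decoded"

    def _decode(self, f):
        self.decodes += 1
        return f"decoded:{f}"

    def row(self, i):
        # dataset[i]["audio"]: decodes only the requested file
        return self._decode(self.files[i])

    def column(self):
        # dataset["audio"]: decodes every file before you can index it
        return [self._decode(f) for f in self.files]


col = LazyColumn([f"clip_{i}.wav" for i in range(1000)])
col.row(0)
print(col.decodes)   # 1 decode for row-first access
col.column()[0]
print(col.decodes)   # 1001: column-first access decoded all 1000 files
```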
#### pr
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### asr
- `file` (`string`): Path to the WAV audio file.
- `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.
- `text` (`string`): The transcription of the audio file.
- `speaker_id` (`integer`): A unique ID of the speaker. The same speaker id can be found for multiple data samples.
- `chapter_id` (`integer`): ID of the audiobook chapter which includes the transcription.
- `id` (`string`): A unique ID of the data sample.
#### ks
- `file` (`string`): Path to the WAV audio file.
- `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.
- `label` (`ClassLabel`): Label of the spoken command. Possible values:
- `0: "yes", 1: "no", 2: "up", 3: "down", 4: "left", 5: "right", 6: "on", 7: "off", 8: "stop", 9: "go", 10: "_silence_", 11: "_unknown_"`
#### qbe
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### ic
- `file` (`string`): Path to the WAV audio file.
- `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.
- `speaker_id` (`string`): ID of the speaker.
- `text` (`string`): Transcription of the spoken command.
- `action` (`ClassLabel`): Label of the command's action. Possible values:
- `0: "activate", 1: "bring", 2: "change language", 3: "deactivate", 4: "decrease", 5: "increase"`
- `object` (`ClassLabel`): Label of the command's object. Possible values:
- `0: "Chinese", 1: "English", 2: "German", 3: "Korean", 4: "heat", 5: "juice", 6: "lamp", 7: "lights", 8: "music", 9: "newspaper", 10: "none", 11: "shoes", 12: "socks", 13: "volume"`
- `location` (`ClassLabel`): Label of the command's location. Possible values:
- `0: "bedroom", 1: "kitchen", 2: "none", 3: "washroom"`
#### sf
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### si
- `file` (`string`): Path to the WAV audio file.
- `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.
- `label` (`ClassLabel`): Label (ID) of the speaker. Possible values:
- `0: "id10001", 1: "id10002", 2: "id10003", ..., 1250: "id11251"`
#### asv
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sd
The data fields in all splits are:
- `record_id` (`string`): ID of the record.
- `file` (`string`): Path to the WAV audio file.
- `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.
- `start` (`integer`): Start frame of the audio.
- `end` (`integer`): End frame of the audio.
- `speakers` (`list` of `dict`): List of speakers in the audio. Each item contains the fields:
- `speaker_id` (`string`): ID of the speaker.
- `start` (`integer`): Frame when the speaker starts speaking.
- `end` (`integer`): Frame when the speaker stops speaking.
#### er
- `file` (`string`): Path to the WAV audio file.
- `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.
- `label` (`ClassLabel`): Label of the speech emotion. Possible values:
- `0: "neu", 1: "hap", 2: "ang", 3: "sad"`
### Data Splits
#### pr
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### asr
| | train | validation | test |
|-----|------:|-----------:|-----:|
| asr | 28539 | 2703 | 2620 |
#### ks
| | train | validation | test |
|----|------:|-----------:|-----:|
| ks | 51094 | 6798 | 3081 |
#### qbe
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### ic
| | train | validation | test |
|----|------:|-----------:|-----:|
| ic | 23132 | 3118 | 3793 |
#### sf
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### si
| | train | validation | test |
|----|-------:|-----------:|-----:|
| si | 138361 | 6904 | 8251 |
#### asv
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sd
The data is split into "train", "dev" and "test" sets, each containing the following number of examples:
| | train | dev | test |
|----|------:|-----:|-----:|
| sd | 13901 | 3014 | 3002 |
#### er
The data is split into 5 sets intended for 5-fold cross-validation:
| | session1 | session2 | session3 | session4 | session5 |
|----|---------:|---------:|---------:|---------:|---------:|
| er | 1085 | 1023 | 1151 | 1031 | 1241 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{DBLP:journals/corr/abs-2105-01051,
author = {Shu{-}Wen Yang and
Po{-}Han Chi and
Yung{-}Sung Chuang and
Cheng{-}I Jeff Lai and
Kushal Lakhotia and
Yist Y. Lin and
Andy T. Liu and
Jiatong Shi and
Xuankai Chang and
Guan{-}Ting Lin and
Tzu{-}Hsien Huang and
Wei{-}Cheng Tseng and
Ko{-}tik Lee and
Da{-}Rong Liu and
Zili Huang and
Shuyan Dong and
Shang{-}Wen Li and
Shinji Watanabe and
Abdelrahman Mohamed and
Hung{-}yi Lee},
title = {{SUPERB:} Speech processing Universal PERformance Benchmark},
journal = {CoRR},
volume = {abs/2105.01051},
year = {2021},
url = {https://arxiv.org/abs/2105.01051},
archivePrefix = {arXiv},
eprint = {2105.01051},
timestamp = {Thu, 01 Jul 2021 13:30:22 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-01051.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```

Note that each SUPERB dataset has its own citation. Please see the source for the correct citation for each contained dataset.
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova) and [@anton-l](https://github.com/anton-l) for adding this dataset.
|
CyberHarem/eagle_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of eagle/イーグル/鹰 (Azur Lane)
This is the dataset of eagle/イーグル/鹰 (Azur Lane), containing 10 images and their tags.
The core tags of this character are `breasts, bangs, large_breasts, long_hair, hairband, grey_hair, yellow_eyes, braid`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 10 | 14.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/eagle_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 10 | 7.80 MiB | [Download](https://huggingface.co/datasets/CyberHarem/eagle_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 24 | 16.70 MiB | [Download](https://huggingface.co/datasets/CyberHarem/eagle_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 10 | 12.40 MiB | [Download](https://huggingface.co/datasets/CyberHarem/eagle_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 24 | 24.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/eagle_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/eagle_azurlane',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 10 |  |  |  |  |  | 1girl, cleavage, looking_at_viewer, solo, black_bra, black_pantyhose, closed_mouth, pencil_skirt, white_shirt, black_skirt, holding, necklace, bra_peek, collarbone, miniskirt, sitting |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | cleavage | looking_at_viewer | solo | black_bra | black_pantyhose | closed_mouth | pencil_skirt | white_shirt | black_skirt | holding | necklace | bra_peek | collarbone | miniskirt | sitting |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------|:--------------------|:-------|:------------|:------------------|:---------------|:---------------|:--------------|:--------------|:----------|:-----------|:-----------|:-------------|:------------|:----------|
| 0 | 10 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
open-llm-leaderboard/details_mosaicml__mpt-7b-8k-chat | ---
pretty_name: Evaluation run of mosaicml/mpt-7b-8k-chat
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [mosaicml/mpt-7b-8k-chat](https://huggingface.co/mosaicml/mpt-7b-8k-chat) on the\
\ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_mosaicml__mpt-7b-8k-chat\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-15T07:55:34.525118](https://huggingface.co/datasets/open-llm-leaderboard/details_mosaicml__mpt-7b-8k-chat/blob/main/results_2023-10-15T07-55-34.525118.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.001153523489932886,\n\
\ \"em_stderr\": 0.00034761798968571076,\n \"f1\": 0.059134857382550615,\n\
\ \"f1_stderr\": 0.0013463403076722808,\n \"acc\": 0.37715604548421977,\n\
\ \"acc_stderr\": 0.00919810862838236\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.001153523489932886,\n \"em_stderr\": 0.00034761798968571076,\n\
\ \"f1\": 0.059134857382550615,\n \"f1_stderr\": 0.0013463403076722808\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.04397270659590599,\n \
\ \"acc_stderr\": 0.005647666449126459\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7103393843725335,\n \"acc_stderr\": 0.012748550807638261\n\
\ }\n}\n```"
repo_url: https://huggingface.co/mosaicml/mpt-7b-8k-chat
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|arc:challenge|25_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|arc:challenge|25_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_15T07_55_34.525118
path:
- '**/details_harness|drop|3_2023-10-15T07-55-34.525118.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-15T07-55-34.525118.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_15T07_55_34.525118
path:
- '**/details_harness|gsm8k|5_2023-10-15T07-55-34.525118.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-15T07-55-34.525118.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hellaswag|10_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hellaswag|10_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-22T22:52:00.675121.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-03T22-39-26.235100.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_08_22T22_52_00.675121
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-22T22:52:00.675121.parquet'
- split: 2023_10_03T22_39_26.235100
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-03T22-39-26.235100.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-03T22-39-26.235100.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_15T07_55_34.525118
path:
- '**/details_harness|winogrande|5_2023-10-15T07-55-34.525118.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-15T07-55-34.525118.parquet'
- config_name: results
data_files:
- split: 2023_10_03T22_39_26.235100
path:
- results_2023-10-03T22-39-26.235100.parquet
- split: 2023_10_15T07_55_34.525118
path:
- results_2023-10-15T07-55-34.525118.parquet
- split: latest
path:
- results_2023-10-15T07-55-34.525118.parquet
---
# Dataset Card for Evaluation run of mosaicml/mpt-7b-8k-chat
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/mosaicml/mpt-7b-8k-chat
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [mosaicml/mpt-7b-8k-chat](https://huggingface.co/mosaicml/mpt-7b-8k-chat) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_mosaicml__mpt-7b-8k-chat",
    "harness_winogrande_5",
    split="train")
```
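The timestamp-named splits described above can be resolved without any helper from the leaderboard tooling, because names like `2023_10_03T22_39_26.235100` sort chronologically as plain strings. A minimal sketch (the `latest_split` helper is hypothetical, not part of any official API):

```python
# Sketch (assumption: not part of the leaderboard tooling): resolving the
# most recent run from timestamp-named splits such as those in this card.
def latest_split(split_names):
    """Return the most recent timestamp-named split.

    Names like 2023_10_03T22_39_26.235100 sort chronologically as plain
    strings, so a lexicographic max suffices.
    """
    return max(s for s in split_names if s != "latest")

splits = ["2023_08_22T22_52_00.675121", "2023_10_03T22_39_26.235100", "latest"]
print(latest_split(splits))  # -> 2023_10_03T22_39_26.235100
```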
## Latest results
These are the [latest results from run 2023-10-15T07:55:34.525118](https://huggingface.co/datasets/open-llm-leaderboard/details_mosaicml__mpt-7b-8k-chat/blob/main/results_2023-10-15T07-55-34.525118.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.001153523489932886,
"em_stderr": 0.00034761798968571076,
"f1": 0.059134857382550615,
"f1_stderr": 0.0013463403076722808,
"acc": 0.37715604548421977,
"acc_stderr": 0.00919810862838236
},
"harness|drop|3": {
"em": 0.001153523489932886,
"em_stderr": 0.00034761798968571076,
"f1": 0.059134857382550615,
"f1_stderr": 0.0013463403076722808
},
"harness|gsm8k|5": {
"acc": 0.04397270659590599,
"acc_stderr": 0.005647666449126459
},
"harness|winogrande|5": {
"acc": 0.7103393843725335,
"acc_stderr": 0.012748550807638261
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
LambdaTests/VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_23_10000000 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: response
dtype: string
splits:
- name: train
num_bytes: 192706
num_examples: 6699
download_size: 122845
dataset_size: 192706
---
# Dataset Card for "VQAv2Validation_ViT_H_14_A_T_C_Q_benchmarks_partition_global_23_10000000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lakshmiv/cool_new_dataset | ---
dataset_info:
features:
- name: name
dtype: string
- name: description
dtype: string
- name: ad
dtype: string
splits:
- name: train
num_bytes: 3152
num_examples: 5
download_size: 7476
dataset_size: 3152
---
# Dataset Card for "cool_new_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Vinnyh589/PersonagensJogos | ---
license: unknown
---
|
cakiki/stack-smol-xxl | ---
dataset_info:
features:
- name: hexsha
dtype: string
- name: size
dtype: int64
- name: ext
dtype: string
- name: lang
dtype: string
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_repo_head_hexsha
dtype: string
- name: max_stars_repo_licenses
sequence: string
- name: max_stars_count
dtype: int64
- name: max_stars_repo_stars_event_min_datetime
dtype: string
- name: max_stars_repo_stars_event_max_datetime
dtype: string
- name: max_issues_repo_path
dtype: string
- name: max_issues_repo_name
dtype: string
- name: max_issues_repo_head_hexsha
dtype: string
- name: max_issues_repo_licenses
sequence: string
- name: max_issues_count
dtype: int64
- name: max_issues_repo_issues_event_min_datetime
dtype: string
- name: max_issues_repo_issues_event_max_datetime
dtype: string
- name: max_forks_repo_path
dtype: string
- name: max_forks_repo_name
dtype: string
- name: max_forks_repo_head_hexsha
dtype: string
- name: max_forks_repo_licenses
sequence: string
- name: max_forks_count
dtype: int64
- name: max_forks_repo_forks_event_min_datetime
dtype: string
- name: max_forks_repo_forks_event_max_datetime
dtype: string
- name: content
dtype: string
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
splits:
- name: train
num_bytes: 78577965159
num_examples: 11658586
download_size: 28807934580
dataset_size: 78577965159
license: other
language:
- code
---
# Dataset Card for "stack-smol-xxl"
This is a subset of the [deduplicated Stack dataset](https://huggingface.co/datasets/bigcode/the-stack-dedup).
It was generated like so:
```python
from datasets import load_dataset, Dataset
languages = ["css", "prolog", "c", "fortran", "solidity", "kotlin", "literate-agda", "julia", "java-server-pages",
"isabelle", "idris", "lean", "powershell", "go", "erlang", "f-sharp", "ada", "pascal", "perl", "r", "protocol-buffer",
"cmake", "sas", "ruby", "rust", "rmarkdown", "c-sharp", "smalltalk", "haskell", "maple", "mathematica", "ocaml",
"makefile", "lua", "literate-coffeescript", "literate-haskell", "restructuredtext", "racket", "standard-ml",
"systemverilog", "tex", "awk", "assembly", "alloy", "agda", "emacs-lisp", "dart", "cuda", "bluespec", "augeas", "batchfile",
"tcsh", "stan", "scala", "tcl", "stata", "applescript", "shell", "clojure", "scheme", "antlr", "sparql", "sql",
"glsl", "elm", "dockerfile", "cpp", "coffeescript", "common-lisp", "elixir", "groovy", "html", "java", "javascript",
"markdown", "php", "python", "typescript", "verilog", "visual-basic", "vhdl", "thrift", "matlab", "yacc", "zig", "xslt", "json", "yaml"]
def dset_gen():
    for language in languages:
        dset = load_dataset("bigcode/the-stack-dedup", data_dir=f"data/{language}", streaming=True, split="train")
        sample = dset.take(250_000)
        for row in sample:
            yield row

dset = Dataset.from_generator(dset_gen)
```
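The per-language sampling above follows a simple pattern: take at most N items from each source in sequence. Stripped of the Hub-specific calls, it can be sketched with plain iterators (a minimal illustration, not the actual generation script):

```python
from itertools import islice

def take_per_source(sources, n):
    """Yield up to n items from each source in order.

    Mirrors the per-language `dset.take(250_000)` sampling used to build
    this subset, but over ordinary iterables instead of streamed datasets.
    """
    for source in sources:
        yield from islice(source, n)

sampled = list(take_per_source([iter(range(10)), iter(range(100, 110))], 3))
print(sampled)  # -> [0, 1, 2, 100, 101, 102]
```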
## Dataset Structure
```
num_examples: 11658586
download_size: 28807934580
dataset_size: 78577965159
```
### Data Instances
Each data instance corresponds to one file. The content of the file is in the `content` feature, and other features (`repository_name`, `licenses`, etc.) provide some metadata. Note that a given file can appear in several different repositories that satisfy our safe-license criterion. If that is the case, only the first of these repositories, in alphabetical order, is shown for simplicity.
### Data Fields
- `content` (string): the content of the file.
- `size` (integer): size of the uncompressed file.
- `lang` (string): the programming language.
- `ext` (string): file extension
- `avg_line_length` (float): the average line-length of the file.
- `max_line_length` (integer): the maximum line-length of the file.
- `alphanum_fraction` (float): the fraction of characters in the file that are alphabetical or numerical characters.
- `hexsha` (string): unique git hash of file
- `max_{stars|forks|issues}_repo_path` (string): path to file in repo containing this file with maximum number of `{stars|forks|issues}`
- `max_{stars|forks|issues}_repo_name` (string): name of repo containing this file with maximum number of `{stars|forks|issues}`
- `max_{stars|forks|issues}_repo_head_hexsha` (string): hexsha of repository head
- `max_{stars|forks|issues}_repo_licenses` (string): licenses in repository
- `max_{stars|forks|issues}_count` (integer): number of `{stars|forks|issues}` in repository
- `max_{stars|forks|issues}_repo_{stars|forks|issues}_min_datetime` (string): first timestamp of a `{stars|forks|issues}` event
- `max_{stars|forks|issues}_repo_{stars|forks|issues}_max_datetime` (string): last timestamp of a `{stars|forks|issues}` event
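The per-file statistics (`avg_line_length`, `max_line_length`, `alphanum_fraction`) can be computed directly from `content`. A minimal sketch of plausible definitions (the exact implementation used by the original pipeline is not shown here and may differ in edge cases):

```python
def line_stats(content: str):
    """Compute per-file statistics as described in the field list above.

    Sketch under assumptions: newlines count toward alphanum_fraction's
    denominator, and averages are over splitlines() lines.
    """
    lines = content.splitlines()
    avg_line_length = sum(len(line) for line in lines) / len(lines)
    max_line_length = max(len(line) for line in lines)
    alphanum_fraction = sum(c.isalnum() for c in content) / len(content)
    return avg_line_length, max_line_length, alphanum_fraction

print(line_stats("ab\ncdef\n"))  # -> (3.0, 4, 0.75)
```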
|
sin3768/sanaDS | ---
license: unknown
---
|
Seongill/nq_cbr_total | ---
dataset_info:
features:
- name: question
dtype: string
- name: answers
sequence: string
- name: ctxs
list:
- name: hasanswer
dtype: bool
- name: id
dtype: string
- name: score
dtype: float64
- name: text
dtype: string
- name: title
dtype: string
- name: case_question
dtype: string
- name: case_context
dtype: string
- name: case_answer
dtype: string
splits:
- name: train
num_bytes: 654471488
num_examples: 87925
- name: test
num_bytes: 26921812
num_examples: 3610
download_size: 386473537
dataset_size: 681393300
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
3ee/regularization-architecture | ---
license: mit
tags:
- stable-diffusion
- regularization-images
- text-to-image
- image-to-image
- dreambooth
- class-instance
- preservation-loss-training
- architecture
---
# Architecture Regularization Images
A collection of regularization and class-instance image datasets of architecture, for use with Stable Diffusion 1.5 in DreamBooth prior-preservation loss training. |
alexandrainst/audio_test_dataset | ---
dataset_info:
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
splits:
- name: train
num_bytes: 108571
num_examples: 5
- name: validation
num_bytes: 116850
num_examples: 5
- name: test
num_bytes: 78943
num_examples: 5
- name: other
num_bytes: 101436
num_examples: 5
- name: invalidated
num_bytes: 156925
num_examples: 5
download_size: 590682
dataset_size: 562725
license: cc0-1.0
language:
- da
size_categories:
- n<1K
---
# Dataset Card for "audio_test_dataset"
This dataset consists of the first 5 samples of [mozilla-foundation/common_voice_13_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0) and is only used for unit testing. |
wojemann/tars_boolq_int | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: question
dtype: string
- name: label
dtype: int64
- name: passage
dtype: string
splits:
- name: train
num_bytes: 5903821
num_examples: 9427
- name: validation
num_bytes: 2023933
num_examples: 3270
download_size: 4944865
dataset_size: 7927754
---
# Dataset Card for "tars_boolq_int"
## Designed for fine-tuning llama-2 models that have uninitialized weights
#### This dataset is a replication of the original [boolq](https://huggingface.co/datasets/boolq) dataset with the outcome variable (`label`) stored as an integer so that the model can automatically compute a loss value per batch.
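The boolean-to-integer conversion described above can be sketched as follows (a minimal illustration; the actual conversion script is not part of this card):

```python
def to_int_labels(answers):
    """Map boolean BoolQ answers to integer labels (False -> 0, True -> 1),
    as done for this dataset so a loss can be computed directly on the
    `label` column. Sketch under assumptions; the original script is not shown.
    """
    return [int(bool(a)) for a in answers]

print(to_int_labels([True, False, True]))  # -> [1, 0, 1]
```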
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zolak/twitter_dataset_50_1713058522 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 22659373
num_examples: 56636
download_size: 11452778
dataset_size: 22659373
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/000def32 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 184
num_examples: 10
download_size: 1341
dataset_size: 184
---
# Dataset Card for "000def32"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
indra-inc/rvl_cdip_train600_valid100_ground_truth | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: Image_id
dtype:
class_label:
names:
'0': advertisement
'1': budget
'2': email
'3': file_folder
'4': form
'5': handwritten
'6': invoice
'7': letter
'8': memo
'9': news_article
'10': presentation
'11': questionnaire
'12': resume
'13': scientific_publication
'14': scientific_report
'15': specification
- name: Image_raw
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 1313222936.0
num_examples: 9600
- name: valid
num_bytes: 180924349.4
num_examples: 1600
download_size: 1281715268
dataset_size: 1494147285.4
---
# Dataset Card for "rvl_cdip_train600_valid100_doc_classification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Fishball02/anime-subtitle-dragon-ball | ---
dataset_info:
features:
- name: episode
dtype: string
- name: start
dtype: float64
- name: end
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1583252
num_examples: 27187
download_size: 1013577
dataset_size: 1583252
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
skar01/mmm | ---
license: bigscience-openrail-m
---
|
weijie210/UFB_preference_iter_0_all | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: pre_score
dtype: float64
- name: post_score
dtype: float64
- name: pre_critique
dtype: string
- name: post_critique
dtype: string
- name: score_diff
dtype: float64
splits:
- name: train_sft
num_bytes: 275680076
num_examples: 55716
- name: test_sft
num_bytes: 4516792
num_examples: 914
download_size: 135686384
dataset_size: 280196868
configs:
- config_name: default
data_files:
- split: train_sft
path: data/train_sft-*
- split: test_sft
path: data/test_sft-*
---
|
Saxo/total_ko_train_set_small_2_basic | ---
license: apache-2.0
---
|
loubnabnl/preprocessed-issues | ---
dataset_info:
features:
- name: repo
dtype: string
- name: org
dtype: string
- name: issue_id
dtype: int64
- name: issue_number
dtype: int64
- name: pull_request
struct:
- name: number
dtype: int64
- name: repo
dtype: string
- name: user_login
dtype: string
- name: events
list:
- name: action
dtype: string
- name: author
dtype: string
- name: comment_id
dtype: float64
- name: datetime
dtype: int64
- name: masked_author
dtype: string
- name: text
dtype: string
- name: title
dtype: string
- name: type
dtype: string
- name: user_count
dtype: int64
- name: event_count
dtype: int64
- name: text_size
dtype: int64
- name: bot_issue
dtype: bool
- name: modified_by_bot
dtype: bool
- name: text_size_no_bots
dtype: int64
- name: modified_usernames
dtype: bool
splits:
- name: train
num_bytes: 15868077
num_examples: 7351
download_size: 7504145
dataset_size: 15868077
---
# Dataset Card for "preprocessed-issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
FINNUMBER/FINCH_TRAIN_ARI_NEWFORMAT | ---
dataset_info:
features:
- name: task
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 15037999
num_examples: 5365
download_size: 8226477
dataset_size: 15037999
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Joycean0301/test_dataset_image | ---
dataset_info:
features:
- name: image
dtype: image
- name: description
dtype: string
splits:
- name: train
num_bytes: 37547.0
num_examples: 1
download_size: 38962
dataset_size: 37547.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/details_NurtureAI__neural-chat-11b-v3-2 | ---
pretty_name: Evaluation run of NurtureAI/neural-chat-11b-v3-2
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [NurtureAI/neural-chat-11b-v3-2](https://huggingface.co/NurtureAI/neural-chat-11b-v3-2)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_NurtureAI__neural-chat-11b-v3-2\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-12-08T01:21:45.753346](https://huggingface.co/datasets/open-llm-leaderboard/details_NurtureAI__neural-chat-11b-v3-2/blob/main/results_2023-12-08T01-21-45.753346.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6233413317621571,\n\
\ \"acc_stderr\": 0.0328873097315513,\n \"acc_norm\": 0.6277933709363942,\n\
\ \"acc_norm_stderr\": 0.0335567202427567,\n \"mc1\": 0.4394124847001224,\n\
\ \"mc1_stderr\": 0.01737452048251371,\n \"mc2\": 0.6022413409108424,\n\
\ \"mc2_stderr\": 0.015142783614018333\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6262798634812287,\n \"acc_stderr\": 0.014137708601759095,\n\
\ \"acc_norm\": 0.6663822525597269,\n \"acc_norm_stderr\": 0.013778687054176536\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.627365066719777,\n\
\ \"acc_stderr\": 0.004825179407757565,\n \"acc_norm\": 0.8211511651065525,\n\
\ \"acc_norm_stderr\": 0.003824424844466082\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.27,\n \"acc_stderr\": 0.044619604333847394,\n \
\ \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.044619604333847394\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6074074074074074,\n\
\ \"acc_stderr\": 0.0421850621536888,\n \"acc_norm\": 0.6074074074074074,\n\
\ \"acc_norm_stderr\": 0.0421850621536888\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6578947368421053,\n \"acc_stderr\": 0.03860731599316092,\n\
\ \"acc_norm\": 0.6578947368421053,\n \"acc_norm_stderr\": 0.03860731599316092\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.61,\n\
\ \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.61,\n \
\ \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6754716981132075,\n \"acc_stderr\": 0.028815615713432115,\n\
\ \"acc_norm\": 0.6754716981132075,\n \"acc_norm_stderr\": 0.028815615713432115\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6805555555555556,\n\
\ \"acc_stderr\": 0.038990736873573344,\n \"acc_norm\": 0.6805555555555556,\n\
\ \"acc_norm_stderr\": 0.038990736873573344\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956911,\n \
\ \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956911\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n\
\ \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.04793724854411019,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.04793724854411019\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.653179190751445,\n\
\ \"acc_stderr\": 0.036291466701596636,\n \"acc_norm\": 0.653179190751445,\n\
\ \"acc_norm_stderr\": 0.036291466701596636\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.46078431372549017,\n \"acc_stderr\": 0.049598599663841815,\n\
\ \"acc_norm\": 0.46078431372549017,\n \"acc_norm_stderr\": 0.049598599663841815\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.73,\n \"acc_stderr\": 0.04461960433384739,\n \"acc_norm\": 0.73,\n\
\ \"acc_norm_stderr\": 0.04461960433384739\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5446808510638298,\n \"acc_stderr\": 0.03255525359340355,\n\
\ \"acc_norm\": 0.5446808510638298,\n \"acc_norm_stderr\": 0.03255525359340355\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4824561403508772,\n\
\ \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.4824561403508772,\n\
\ \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5379310344827586,\n \"acc_stderr\": 0.04154659671707548,\n\
\ \"acc_norm\": 0.5379310344827586,\n \"acc_norm_stderr\": 0.04154659671707548\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.3783068783068783,\n \"acc_stderr\": 0.024976954053155254,\n \"\
acc_norm\": 0.3783068783068783,\n \"acc_norm_stderr\": 0.024976954053155254\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4365079365079365,\n\
\ \"acc_stderr\": 0.04435932892851466,\n \"acc_norm\": 0.4365079365079365,\n\
\ \"acc_norm_stderr\": 0.04435932892851466\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001974,\n \
\ \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001974\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7419354838709677,\n\
\ \"acc_stderr\": 0.024892469172462826,\n \"acc_norm\": 0.7419354838709677,\n\
\ \"acc_norm_stderr\": 0.024892469172462826\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.4729064039408867,\n \"acc_stderr\": 0.03512819077876106,\n\
\ \"acc_norm\": 0.4729064039408867,\n \"acc_norm_stderr\": 0.03512819077876106\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\"\
: 0.66,\n \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7575757575757576,\n \"acc_stderr\": 0.03346409881055953,\n\
\ \"acc_norm\": 0.7575757575757576,\n \"acc_norm_stderr\": 0.03346409881055953\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7474747474747475,\n \"acc_stderr\": 0.030954055470365897,\n \"\
acc_norm\": 0.7474747474747475,\n \"acc_norm_stderr\": 0.030954055470365897\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8652849740932642,\n \"acc_stderr\": 0.024639789097709443,\n\
\ \"acc_norm\": 0.8652849740932642,\n \"acc_norm_stderr\": 0.024639789097709443\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6615384615384615,\n \"acc_stderr\": 0.023991500500313036,\n\
\ \"acc_norm\": 0.6615384615384615,\n \"acc_norm_stderr\": 0.023991500500313036\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3111111111111111,\n \"acc_stderr\": 0.02822644674968352,\n \
\ \"acc_norm\": 0.3111111111111111,\n \"acc_norm_stderr\": 0.02822644674968352\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6722689075630253,\n \"acc_stderr\": 0.03048991141767323,\n \
\ \"acc_norm\": 0.6722689075630253,\n \"acc_norm_stderr\": 0.03048991141767323\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3443708609271523,\n \"acc_stderr\": 0.03879687024073327,\n \"\
acc_norm\": 0.3443708609271523,\n \"acc_norm_stderr\": 0.03879687024073327\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8348623853211009,\n \"acc_stderr\": 0.015919557829976054,\n \"\
acc_norm\": 0.8348623853211009,\n \"acc_norm_stderr\": 0.015919557829976054\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5833333333333334,\n \"acc_stderr\": 0.03362277436608043,\n \"\
acc_norm\": 0.5833333333333334,\n \"acc_norm_stderr\": 0.03362277436608043\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7843137254901961,\n \"acc_stderr\": 0.028867431449849316,\n \"\
acc_norm\": 0.7843137254901961,\n \"acc_norm_stderr\": 0.028867431449849316\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7679324894514767,\n \"acc_stderr\": 0.027479744550808507,\n \
\ \"acc_norm\": 0.7679324894514767,\n \"acc_norm_stderr\": 0.027479744550808507\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6816143497757847,\n\
\ \"acc_stderr\": 0.03126580522513713,\n \"acc_norm\": 0.6816143497757847,\n\
\ \"acc_norm_stderr\": 0.03126580522513713\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7099236641221374,\n \"acc_stderr\": 0.03980066246467766,\n\
\ \"acc_norm\": 0.7099236641221374,\n \"acc_norm_stderr\": 0.03980066246467766\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.768595041322314,\n \"acc_stderr\": 0.0384985609879409,\n \"acc_norm\"\
: 0.768595041322314,\n \"acc_norm_stderr\": 0.0384985609879409\n },\n\
\ \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7222222222222222,\n\
\ \"acc_stderr\": 0.04330043749650741,\n \"acc_norm\": 0.7222222222222222,\n\
\ \"acc_norm_stderr\": 0.04330043749650741\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.6993865030674846,\n \"acc_stderr\": 0.03602511318806771,\n\
\ \"acc_norm\": 0.6993865030674846,\n \"acc_norm_stderr\": 0.03602511318806771\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.48214285714285715,\n\
\ \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.48214285714285715,\n\
\ \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7864077669902912,\n \"acc_stderr\": 0.040580420156460344,\n\
\ \"acc_norm\": 0.7864077669902912,\n \"acc_norm_stderr\": 0.040580420156460344\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8589743589743589,\n\
\ \"acc_stderr\": 0.022801382534597518,\n \"acc_norm\": 0.8589743589743589,\n\
\ \"acc_norm_stderr\": 0.022801382534597518\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \
\ \"acc_norm\": 0.66,\n \"acc_norm_stderr\": 0.04760952285695237\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7969348659003831,\n\
\ \"acc_stderr\": 0.014385525076611571,\n \"acc_norm\": 0.7969348659003831,\n\
\ \"acc_norm_stderr\": 0.014385525076611571\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6705202312138728,\n \"acc_stderr\": 0.025305258131879713,\n\
\ \"acc_norm\": 0.6705202312138728,\n \"acc_norm_stderr\": 0.025305258131879713\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3754189944134078,\n\
\ \"acc_stderr\": 0.01619510424846353,\n \"acc_norm\": 0.3754189944134078,\n\
\ \"acc_norm_stderr\": 0.01619510424846353\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.696078431372549,\n \"acc_stderr\": 0.026336613469046626,\n\
\ \"acc_norm\": 0.696078431372549,\n \"acc_norm_stderr\": 0.026336613469046626\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6881028938906752,\n\
\ \"acc_stderr\": 0.026311858071854155,\n \"acc_norm\": 0.6881028938906752,\n\
\ \"acc_norm_stderr\": 0.026311858071854155\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6882716049382716,\n \"acc_stderr\": 0.025773111169630457,\n\
\ \"acc_norm\": 0.6882716049382716,\n \"acc_norm_stderr\": 0.025773111169630457\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4432624113475177,\n \"acc_stderr\": 0.029634838473766006,\n \
\ \"acc_norm\": 0.4432624113475177,\n \"acc_norm_stderr\": 0.029634838473766006\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.42242503259452413,\n\
\ \"acc_stderr\": 0.012615600475734921,\n \"acc_norm\": 0.42242503259452413,\n\
\ \"acc_norm_stderr\": 0.012615600475734921\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6911764705882353,\n \"acc_stderr\": 0.028064998167040094,\n\
\ \"acc_norm\": 0.6911764705882353,\n \"acc_norm_stderr\": 0.028064998167040094\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6290849673202614,\n \"acc_stderr\": 0.019542101564854125,\n \
\ \"acc_norm\": 0.6290849673202614,\n \"acc_norm_stderr\": 0.019542101564854125\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n\
\ \"acc_stderr\": 0.04525393596302505,\n \"acc_norm\": 0.6636363636363637,\n\
\ \"acc_norm_stderr\": 0.04525393596302505\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7142857142857143,\n \"acc_stderr\": 0.0289205832206756,\n\
\ \"acc_norm\": 0.7142857142857143,\n \"acc_norm_stderr\": 0.0289205832206756\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8407960199004975,\n\
\ \"acc_stderr\": 0.02587064676616914,\n \"acc_norm\": 0.8407960199004975,\n\
\ \"acc_norm_stderr\": 0.02587064676616914\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.83,\n \"acc_stderr\": 0.0377525168068637,\n \
\ \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.0377525168068637\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5542168674698795,\n\
\ \"acc_stderr\": 0.038695433234721015,\n \"acc_norm\": 0.5542168674698795,\n\
\ \"acc_norm_stderr\": 0.038695433234721015\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8187134502923976,\n \"acc_stderr\": 0.029547741687640038,\n\
\ \"acc_norm\": 0.8187134502923976,\n \"acc_norm_stderr\": 0.029547741687640038\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4394124847001224,\n\
\ \"mc1_stderr\": 0.01737452048251371,\n \"mc2\": 0.6022413409108424,\n\
\ \"mc2_stderr\": 0.015142783614018333\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7963693764798737,\n \"acc_stderr\": 0.011317798781626913\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.42153146322971946,\n \
\ \"acc_stderr\": 0.013601824409483272\n }\n}\n```"
repo_url: https://huggingface.co/NurtureAI/neural-chat-11b-v3-2
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|arc:challenge|25_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|gsm8k|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hellaswag|10_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-08T01-21-45.753346.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-08T01-21-45.753346.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- '**/details_harness|winogrande|5_2023-12-08T01-21-45.753346.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-12-08T01-21-45.753346.parquet'
- config_name: results
data_files:
- split: 2023_12_08T01_21_45.753346
path:
- results_2023-12-08T01-21-45.753346.parquet
- split: latest
path:
- results_2023-12-08T01-21-45.753346.parquet
---
# Dataset Card for Evaluation run of NurtureAI/neural-chat-11b-v3-2
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/NurtureAI/neural-chat-11b-v3-2
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [NurtureAI/neural-chat-11b-v3-2](https://huggingface.co/NurtureAI/neural-chat-11b-v3-2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_NurtureAI__neural-chat-11b-v3-2",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-08T01:21:45.753346](https://huggingface.co/datasets/open-llm-leaderboard/details_NurtureAI__neural-chat-11b-v3-2/blob/main/results_2023-12-08T01-21-45.753346.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the "results" and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6233413317621571,
"acc_stderr": 0.0328873097315513,
"acc_norm": 0.6277933709363942,
"acc_norm_stderr": 0.0335567202427567,
"mc1": 0.4394124847001224,
"mc1_stderr": 0.01737452048251371,
"mc2": 0.6022413409108424,
"mc2_stderr": 0.015142783614018333
},
"harness|arc:challenge|25": {
"acc": 0.6262798634812287,
"acc_stderr": 0.014137708601759095,
"acc_norm": 0.6663822525597269,
"acc_norm_stderr": 0.013778687054176536
},
"harness|hellaswag|10": {
"acc": 0.627365066719777,
"acc_stderr": 0.004825179407757565,
"acc_norm": 0.8211511651065525,
"acc_norm_stderr": 0.003824424844466082
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.27,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.27,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6074074074074074,
"acc_stderr": 0.0421850621536888,
"acc_norm": 0.6074074074074074,
"acc_norm_stderr": 0.0421850621536888
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6578947368421053,
"acc_stderr": 0.03860731599316092,
"acc_norm": 0.6578947368421053,
"acc_norm_stderr": 0.03860731599316092
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6754716981132075,
"acc_stderr": 0.028815615713432115,
"acc_norm": 0.6754716981132075,
"acc_norm_stderr": 0.028815615713432115
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6805555555555556,
"acc_stderr": 0.038990736873573344,
"acc_norm": 0.6805555555555556,
"acc_norm_stderr": 0.038990736873573344
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956911,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956911
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.35,
"acc_stderr": 0.04793724854411019,
"acc_norm": 0.35,
"acc_norm_stderr": 0.04793724854411019
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.653179190751445,
"acc_stderr": 0.036291466701596636,
"acc_norm": 0.653179190751445,
"acc_norm_stderr": 0.036291466701596636
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.46078431372549017,
"acc_stderr": 0.049598599663841815,
"acc_norm": 0.46078431372549017,
"acc_norm_stderr": 0.049598599663841815
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.73,
"acc_stderr": 0.04461960433384739,
"acc_norm": 0.73,
"acc_norm_stderr": 0.04461960433384739
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5446808510638298,
"acc_stderr": 0.03255525359340355,
"acc_norm": 0.5446808510638298,
"acc_norm_stderr": 0.03255525359340355
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4824561403508772,
"acc_stderr": 0.04700708033551038,
"acc_norm": 0.4824561403508772,
"acc_norm_stderr": 0.04700708033551038
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5379310344827586,
"acc_stderr": 0.04154659671707548,
"acc_norm": 0.5379310344827586,
"acc_norm_stderr": 0.04154659671707548
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3783068783068783,
"acc_stderr": 0.024976954053155254,
"acc_norm": 0.3783068783068783,
"acc_norm_stderr": 0.024976954053155254
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4365079365079365,
"acc_stderr": 0.04435932892851466,
"acc_norm": 0.4365079365079365,
"acc_norm_stderr": 0.04435932892851466
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001974,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001974
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7419354838709677,
"acc_stderr": 0.024892469172462826,
"acc_norm": 0.7419354838709677,
"acc_norm_stderr": 0.024892469172462826
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4729064039408867,
"acc_stderr": 0.03512819077876106,
"acc_norm": 0.4729064039408867,
"acc_norm_stderr": 0.03512819077876106
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695237,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695237
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7575757575757576,
"acc_stderr": 0.03346409881055953,
"acc_norm": 0.7575757575757576,
"acc_norm_stderr": 0.03346409881055953
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7474747474747475,
"acc_stderr": 0.030954055470365897,
"acc_norm": 0.7474747474747475,
"acc_norm_stderr": 0.030954055470365897
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8652849740932642,
"acc_stderr": 0.024639789097709443,
"acc_norm": 0.8652849740932642,
"acc_norm_stderr": 0.024639789097709443
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6615384615384615,
"acc_stderr": 0.023991500500313036,
"acc_norm": 0.6615384615384615,
"acc_norm_stderr": 0.023991500500313036
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3111111111111111,
"acc_stderr": 0.02822644674968352,
"acc_norm": 0.3111111111111111,
"acc_norm_stderr": 0.02822644674968352
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6722689075630253,
"acc_stderr": 0.03048991141767323,
"acc_norm": 0.6722689075630253,
"acc_norm_stderr": 0.03048991141767323
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3443708609271523,
"acc_stderr": 0.03879687024073327,
"acc_norm": 0.3443708609271523,
"acc_norm_stderr": 0.03879687024073327
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8348623853211009,
"acc_stderr": 0.015919557829976054,
"acc_norm": 0.8348623853211009,
"acc_norm_stderr": 0.015919557829976054
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5833333333333334,
"acc_stderr": 0.03362277436608043,
"acc_norm": 0.5833333333333334,
"acc_norm_stderr": 0.03362277436608043
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7843137254901961,
"acc_stderr": 0.028867431449849316,
"acc_norm": 0.7843137254901961,
"acc_norm_stderr": 0.028867431449849316
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7679324894514767,
"acc_stderr": 0.027479744550808507,
"acc_norm": 0.7679324894514767,
"acc_norm_stderr": 0.027479744550808507
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6816143497757847,
"acc_stderr": 0.03126580522513713,
"acc_norm": 0.6816143497757847,
"acc_norm_stderr": 0.03126580522513713
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7099236641221374,
"acc_stderr": 0.03980066246467766,
"acc_norm": 0.7099236641221374,
"acc_norm_stderr": 0.03980066246467766
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.768595041322314,
"acc_stderr": 0.0384985609879409,
"acc_norm": 0.768595041322314,
"acc_norm_stderr": 0.0384985609879409
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.04330043749650741,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.04330043749650741
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6993865030674846,
"acc_stderr": 0.03602511318806771,
"acc_norm": 0.6993865030674846,
"acc_norm_stderr": 0.03602511318806771
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.48214285714285715,
"acc_stderr": 0.047427623612430116,
"acc_norm": 0.48214285714285715,
"acc_norm_stderr": 0.047427623612430116
},
"harness|hendrycksTest-management|5": {
"acc": 0.7864077669902912,
"acc_stderr": 0.040580420156460344,
"acc_norm": 0.7864077669902912,
"acc_norm_stderr": 0.040580420156460344
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8589743589743589,
"acc_stderr": 0.022801382534597518,
"acc_norm": 0.8589743589743589,
"acc_norm_stderr": 0.022801382534597518
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695237,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695237
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7969348659003831,
"acc_stderr": 0.014385525076611571,
"acc_norm": 0.7969348659003831,
"acc_norm_stderr": 0.014385525076611571
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6705202312138728,
"acc_stderr": 0.025305258131879713,
"acc_norm": 0.6705202312138728,
"acc_norm_stderr": 0.025305258131879713
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3754189944134078,
"acc_stderr": 0.01619510424846353,
"acc_norm": 0.3754189944134078,
"acc_norm_stderr": 0.01619510424846353
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.696078431372549,
"acc_stderr": 0.026336613469046626,
"acc_norm": 0.696078431372549,
"acc_norm_stderr": 0.026336613469046626
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6881028938906752,
"acc_stderr": 0.026311858071854155,
"acc_norm": 0.6881028938906752,
"acc_norm_stderr": 0.026311858071854155
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6882716049382716,
"acc_stderr": 0.025773111169630457,
"acc_norm": 0.6882716049382716,
"acc_norm_stderr": 0.025773111169630457
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4432624113475177,
"acc_stderr": 0.029634838473766006,
"acc_norm": 0.4432624113475177,
"acc_norm_stderr": 0.029634838473766006
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.42242503259452413,
"acc_stderr": 0.012615600475734921,
"acc_norm": 0.42242503259452413,
"acc_norm_stderr": 0.012615600475734921
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6911764705882353,
"acc_stderr": 0.028064998167040094,
"acc_norm": 0.6911764705882353,
"acc_norm_stderr": 0.028064998167040094
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6290849673202614,
"acc_stderr": 0.019542101564854125,
"acc_norm": 0.6290849673202614,
"acc_norm_stderr": 0.019542101564854125
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302505,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302505
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7142857142857143,
"acc_stderr": 0.0289205832206756,
"acc_norm": 0.7142857142857143,
"acc_norm_stderr": 0.0289205832206756
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8407960199004975,
"acc_stderr": 0.02587064676616914,
"acc_norm": 0.8407960199004975,
"acc_norm_stderr": 0.02587064676616914
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.83,
"acc_stderr": 0.0377525168068637,
"acc_norm": 0.83,
"acc_norm_stderr": 0.0377525168068637
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5542168674698795,
"acc_stderr": 0.038695433234721015,
"acc_norm": 0.5542168674698795,
"acc_norm_stderr": 0.038695433234721015
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8187134502923976,
"acc_stderr": 0.029547741687640038,
"acc_norm": 0.8187134502923976,
"acc_norm_stderr": 0.029547741687640038
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4394124847001224,
"mc1_stderr": 0.01737452048251371,
"mc2": 0.6022413409108424,
"mc2_stderr": 0.015142783614018333
},
"harness|winogrande|5": {
"acc": 0.7963693764798737,
"acc_stderr": 0.011317798781626913
},
"harness|gsm8k|5": {
"acc": 0.42153146322971946,
"acc_stderr": 0.013601824409483272
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
hamzakhaled/LMS_DS | ---
license: mit
---
|
andersonbcdefg/sft_language_submix_v2 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 1798819313.9076667
num_examples: 1492027
download_size: 830793165
dataset_size: 1798819313.9076667
---
# Dataset Card for "sft_language_submix_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
aliencen/gf | ---
license: openrail
---
|
Illia56/Military-Aircraft-Detection | ---
license: apache-2.0
task_categories:
- object-detection
- zero-shot-classification
- zero-shot-image-classification
- depth-estimation
- image-classification
- image-segmentation
tags:
- Image
- Computer Vision
- Military
- Aviation
- Engineering
size_categories:
- 1M<n<10M
---
Dataset for object detection of military aircraft.

Bounding boxes are annotated in PASCAL VOC format (xmin, ymin, xmax, ymax).

43 aircraft types:
(A-10, A-400M, AG-600, AV-8B, B-1, B-2, B-52, Be-200, C-130, C-17, C-2, C-5, E-2, E-7, EF-2000, F-117, F-14, F-15, F-16, F/A-18, F-22, F-35, F-4, J-20, JAS-39, MQ-9, Mig-31, Mirage2000, P-3(CP-140), RQ-4, Rafale, SR-71(may contain A-12), Su-34, Su-57, Tornado, Tu-160, Tu-95(Tu-142), U-2, US-2(US-1A Kai), V-22, Vulcan, XB-70, YF-23)

Please let me know if you find wrong labels or duplicated images.
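
Since the bounding boxes follow PASCAL VOC conventions, annotations can be parsed with the Python standard library. A minimal sketch is shown below; the inline XML sample and its tag layout are illustrative assumptions, not taken from the actual dataset files, which may differ in structure.

```python
import xml.etree.ElementTree as ET

# Hypothetical PASCAL VOC annotation for a single image.
SAMPLE_VOC = """<annotation>
  <filename>example.jpg</filename>
  <object>
    <name>F-16</name>
    <bndbox>
      <xmin>48</xmin><ymin>240</ymin><xmax>195</xmax><ymax>371</ymax>
    </bndbox>
  </object>
</annotation>"""

def parse_voc(xml_text):
    """Return a list of (class_name, (xmin, ymin, xmax, ymax)) tuples."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        name = obj.findtext("name")
        bb = obj.find("bndbox")
        # VOC coordinates are absolute pixel values: top-left and bottom-right.
        box = tuple(int(bb.findtext(k)) for k in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((name, box))
    return boxes

print(parse_voc(SAMPLE_VOC))  # [('F-16', (48, 240, 195, 371))]
```

The same function applies unchanged to annotations with multiple `<object>` entries per image.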
open-llm-leaderboard/details_Sharathhebbar24__Instruct_GPT_v1 | ---
pretty_name: Evaluation run of Sharathhebbar24/Instruct_GPT_v1
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Sharathhebbar24/Instruct_GPT_v1](https://huggingface.co/Sharathhebbar24/Instruct_GPT_v1)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Sharathhebbar24__Instruct_GPT_v1\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-09T17:01:55.422442](https://huggingface.co/datasets/open-llm-leaderboard/details_Sharathhebbar24__Instruct_GPT_v1/blob/main/results_2024-02-09T17-01-55.422442.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.2663539612944469,\n\
\ \"acc_stderr\": 0.03090555161671063,\n \"acc_norm\": 0.26787225113832785,\n\
\ \"acc_norm_stderr\": 0.03169005216444534,\n \"mc1\": 0.25703794369645044,\n\
\ \"mc1_stderr\": 0.01529807750948508,\n \"mc2\": 0.4222152001545833,\n\
\ \"mc2_stderr\": 0.014647547913363862\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.2380546075085324,\n \"acc_stderr\": 0.012445770028026205,\n\
\ \"acc_norm\": 0.28071672354948807,\n \"acc_norm_stderr\": 0.01313123812697558\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.32732523401712804,\n\
\ \"acc_stderr\": 0.004682780790508342,\n \"acc_norm\": 0.3897629954192392,\n\
\ \"acc_norm_stderr\": 0.004866997110388195\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.23,\n \"acc_stderr\": 0.04229525846816503,\n \
\ \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.04229525846816503\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.23703703703703705,\n\
\ \"acc_stderr\": 0.03673731683969506,\n \"acc_norm\": 0.23703703703703705,\n\
\ \"acc_norm_stderr\": 0.03673731683969506\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.2894736842105263,\n \"acc_stderr\": 0.03690677986137283,\n\
\ \"acc_norm\": 0.2894736842105263,\n \"acc_norm_stderr\": 0.03690677986137283\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.21,\n\
\ \"acc_stderr\": 0.040936018074033256,\n \"acc_norm\": 0.21,\n \
\ \"acc_norm_stderr\": 0.040936018074033256\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.30566037735849055,\n \"acc_stderr\": 0.028353298073322666,\n\
\ \"acc_norm\": 0.30566037735849055,\n \"acc_norm_stderr\": 0.028353298073322666\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2569444444444444,\n\
\ \"acc_stderr\": 0.03653946969442099,\n \"acc_norm\": 0.2569444444444444,\n\
\ \"acc_norm_stderr\": 0.03653946969442099\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \
\ \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \"acc_norm\": 0.33,\n\
\ \"acc_norm_stderr\": 0.04725815626252604\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.23,\n \"acc_stderr\": 0.04229525846816506,\n \
\ \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.04229525846816506\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.27167630057803466,\n\
\ \"acc_stderr\": 0.03391750322321659,\n \"acc_norm\": 0.27167630057803466,\n\
\ \"acc_norm_stderr\": 0.03391750322321659\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.3431372549019608,\n \"acc_stderr\": 0.047240073523838896,\n\
\ \"acc_norm\": 0.3431372549019608,\n \"acc_norm_stderr\": 0.047240073523838896\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.23,\n \"acc_stderr\": 0.042295258468165044,\n \"acc_norm\": 0.23,\n\
\ \"acc_norm_stderr\": 0.042295258468165044\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.2553191489361702,\n \"acc_stderr\": 0.028504856470514185,\n\
\ \"acc_norm\": 0.2553191489361702,\n \"acc_norm_stderr\": 0.028504856470514185\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2631578947368421,\n\
\ \"acc_stderr\": 0.041424397194893624,\n \"acc_norm\": 0.2631578947368421,\n\
\ \"acc_norm_stderr\": 0.041424397194893624\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.2206896551724138,\n \"acc_stderr\": 0.034559302019248124,\n\
\ \"acc_norm\": 0.2206896551724138,\n \"acc_norm_stderr\": 0.034559302019248124\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.25132275132275134,\n \"acc_stderr\": 0.022340482339643895,\n \"\
acc_norm\": 0.25132275132275134,\n \"acc_norm_stderr\": 0.022340482339643895\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.2857142857142857,\n\
\ \"acc_stderr\": 0.0404061017820884,\n \"acc_norm\": 0.2857142857142857,\n\
\ \"acc_norm_stderr\": 0.0404061017820884\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.3096774193548387,\n \"acc_stderr\": 0.026302774983517418,\n \"\
acc_norm\": 0.3096774193548387,\n \"acc_norm_stderr\": 0.026302774983517418\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.3054187192118227,\n \"acc_stderr\": 0.03240661565868408,\n \"\
acc_norm\": 0.3054187192118227,\n \"acc_norm_stderr\": 0.03240661565868408\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.17,\n \"acc_stderr\": 0.03775251680686371,\n \"acc_norm\"\
: 0.17,\n \"acc_norm_stderr\": 0.03775251680686371\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.23030303030303031,\n \"acc_stderr\": 0.03287666758603488,\n\
\ \"acc_norm\": 0.23030303030303031,\n \"acc_norm_stderr\": 0.03287666758603488\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.36363636363636365,\n \"acc_stderr\": 0.034273086529999344,\n \"\
acc_norm\": 0.36363636363636365,\n \"acc_norm_stderr\": 0.034273086529999344\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.35751295336787564,\n \"acc_stderr\": 0.034588160421810045,\n\
\ \"acc_norm\": 0.35751295336787564,\n \"acc_norm_stderr\": 0.034588160421810045\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.34102564102564104,\n \"acc_stderr\": 0.024035489676335065,\n\
\ \"acc_norm\": 0.34102564102564104,\n \"acc_norm_stderr\": 0.024035489676335065\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.2777777777777778,\n \"acc_stderr\": 0.02730914058823019,\n \
\ \"acc_norm\": 0.2777777777777778,\n \"acc_norm_stderr\": 0.02730914058823019\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.3487394957983193,\n \"acc_stderr\": 0.03095663632856655,\n \
\ \"acc_norm\": 0.3487394957983193,\n \"acc_norm_stderr\": 0.03095663632856655\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.2847682119205298,\n \"acc_stderr\": 0.03684881521389023,\n \"\
acc_norm\": 0.2847682119205298,\n \"acc_norm_stderr\": 0.03684881521389023\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.3486238532110092,\n \"acc_stderr\": 0.020431254090714328,\n \"\
acc_norm\": 0.3486238532110092,\n \"acc_norm_stderr\": 0.020431254090714328\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.4722222222222222,\n \"acc_stderr\": 0.0340470532865388,\n \"acc_norm\"\
: 0.4722222222222222,\n \"acc_norm_stderr\": 0.0340470532865388\n },\n\
\ \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.24019607843137256,\n\
\ \"acc_stderr\": 0.02998373305591362,\n \"acc_norm\": 0.24019607843137256,\n\
\ \"acc_norm_stderr\": 0.02998373305591362\n },\n \"harness|hendrycksTest-high_school_world_history|5\"\
: {\n \"acc\": 0.189873417721519,\n \"acc_stderr\": 0.025530100460233497,\n\
\ \"acc_norm\": 0.189873417721519,\n \"acc_norm_stderr\": 0.025530100460233497\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.10762331838565023,\n\
\ \"acc_stderr\": 0.020799400082880004,\n \"acc_norm\": 0.10762331838565023,\n\
\ \"acc_norm_stderr\": 0.020799400082880004\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.2748091603053435,\n \"acc_stderr\": 0.039153454088478354,\n\
\ \"acc_norm\": 0.2748091603053435,\n \"acc_norm_stderr\": 0.039153454088478354\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.14049586776859505,\n \"acc_stderr\": 0.0317223342600216,\n \"\
acc_norm\": 0.14049586776859505,\n \"acc_norm_stderr\": 0.0317223342600216\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.2037037037037037,\n\
\ \"acc_stderr\": 0.038935425188248475,\n \"acc_norm\": 0.2037037037037037,\n\
\ \"acc_norm_stderr\": 0.038935425188248475\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.18404907975460122,\n \"acc_stderr\": 0.03044677768797173,\n\
\ \"acc_norm\": 0.18404907975460122,\n \"acc_norm_stderr\": 0.03044677768797173\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.16964285714285715,\n\
\ \"acc_stderr\": 0.0356236785009539,\n \"acc_norm\": 0.16964285714285715,\n\
\ \"acc_norm_stderr\": 0.0356236785009539\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.34951456310679613,\n \"acc_stderr\": 0.04721188506097173,\n\
\ \"acc_norm\": 0.34951456310679613,\n \"acc_norm_stderr\": 0.04721188506097173\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.19230769230769232,\n\
\ \"acc_stderr\": 0.025819233256483706,\n \"acc_norm\": 0.19230769230769232,\n\
\ \"acc_norm_stderr\": 0.025819233256483706\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.18,\n \"acc_stderr\": 0.03861229196653694,\n \
\ \"acc_norm\": 0.18,\n \"acc_norm_stderr\": 0.03861229196653694\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.24648786717752236,\n\
\ \"acc_stderr\": 0.015411308769686934,\n \"acc_norm\": 0.24648786717752236,\n\
\ \"acc_norm_stderr\": 0.015411308769686934\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.2514450867052023,\n \"acc_stderr\": 0.02335736578587403,\n\
\ \"acc_norm\": 0.2514450867052023,\n \"acc_norm_stderr\": 0.02335736578587403\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2424581005586592,\n\
\ \"acc_stderr\": 0.014333522059217889,\n \"acc_norm\": 0.2424581005586592,\n\
\ \"acc_norm_stderr\": 0.014333522059217889\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.28431372549019607,\n \"acc_stderr\": 0.025829163272757485,\n\
\ \"acc_norm\": 0.28431372549019607,\n \"acc_norm_stderr\": 0.025829163272757485\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.24758842443729903,\n\
\ \"acc_stderr\": 0.024513879973621967,\n \"acc_norm\": 0.24758842443729903,\n\
\ \"acc_norm_stderr\": 0.024513879973621967\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.22839506172839505,\n \"acc_stderr\": 0.023358211840626267,\n\
\ \"acc_norm\": 0.22839506172839505,\n \"acc_norm_stderr\": 0.023358211840626267\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.2553191489361702,\n \"acc_stderr\": 0.02601199293090201,\n \
\ \"acc_norm\": 0.2553191489361702,\n \"acc_norm_stderr\": 0.02601199293090201\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.2561929595827901,\n\
\ \"acc_stderr\": 0.011149173153110582,\n \"acc_norm\": 0.2561929595827901,\n\
\ \"acc_norm_stderr\": 0.011149173153110582\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.4522058823529412,\n \"acc_stderr\": 0.030233758551596452,\n\
\ \"acc_norm\": 0.4522058823529412,\n \"acc_norm_stderr\": 0.030233758551596452\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.21895424836601307,\n \"acc_stderr\": 0.016729937565537544,\n \
\ \"acc_norm\": 0.21895424836601307,\n \"acc_norm_stderr\": 0.016729937565537544\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.21818181818181817,\n\
\ \"acc_stderr\": 0.03955932861795833,\n \"acc_norm\": 0.21818181818181817,\n\
\ \"acc_norm_stderr\": 0.03955932861795833\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.39183673469387753,\n \"acc_stderr\": 0.031251275910891656,\n\
\ \"acc_norm\": 0.39183673469387753,\n \"acc_norm_stderr\": 0.031251275910891656\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.24875621890547264,\n\
\ \"acc_stderr\": 0.030567675938916707,\n \"acc_norm\": 0.24875621890547264,\n\
\ \"acc_norm_stderr\": 0.030567675938916707\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.26,\n \"acc_stderr\": 0.04408440022768078,\n \
\ \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.04408440022768078\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.23493975903614459,\n\
\ \"acc_stderr\": 0.03300533186128922,\n \"acc_norm\": 0.23493975903614459,\n\
\ \"acc_norm_stderr\": 0.03300533186128922\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.27485380116959063,\n \"acc_stderr\": 0.03424042924691584,\n\
\ \"acc_norm\": 0.27485380116959063,\n \"acc_norm_stderr\": 0.03424042924691584\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.25703794369645044,\n\
\ \"mc1_stderr\": 0.01529807750948508,\n \"mc2\": 0.4222152001545833,\n\
\ \"mc2_stderr\": 0.014647547913363862\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.5406471981057617,\n \"acc_stderr\": 0.014005973823825138\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0075815011372251705,\n \
\ \"acc_stderr\": 0.0023892815120772136\n }\n}\n```"
repo_url: https://huggingface.co/Sharathhebbar24/Instruct_GPT_v1
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|arc:challenge|25_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|gsm8k|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hellaswag|10_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-09T17-01-55.422442.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-09T17-01-55.422442.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- '**/details_harness|winogrande|5_2024-02-09T17-01-55.422442.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-09T17-01-55.422442.parquet'
- config_name: results
data_files:
- split: 2024_02_09T17_01_55.422442
path:
- results_2024-02-09T17-01-55.422442.parquet
- split: latest
path:
- results_2024-02-09T17-01-55.422442.parquet
---
# Dataset Card for Evaluation run of Sharathhebbar24/Instruct_GPT_v1
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Sharathhebbar24/Instruct_GPT_v1](https://huggingface.co/Sharathhebbar24/Instruct_GPT_v1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Sharathhebbar24__Instruct_GPT_v1",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-02-09T17:01:55.422442](https://huggingface.co/datasets/open-llm-leaderboard/details_Sharathhebbar24__Instruct_GPT_v1/blob/main/results_2024-02-09T17-01-55.422442.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.2663539612944469,
"acc_stderr": 0.03090555161671063,
"acc_norm": 0.26787225113832785,
"acc_norm_stderr": 0.03169005216444534,
"mc1": 0.25703794369645044,
"mc1_stderr": 0.01529807750948508,
"mc2": 0.4222152001545833,
"mc2_stderr": 0.014647547913363862
},
"harness|arc:challenge|25": {
"acc": 0.2380546075085324,
"acc_stderr": 0.012445770028026205,
"acc_norm": 0.28071672354948807,
"acc_norm_stderr": 0.01313123812697558
},
"harness|hellaswag|10": {
"acc": 0.32732523401712804,
"acc_stderr": 0.004682780790508342,
"acc_norm": 0.3897629954192392,
"acc_norm_stderr": 0.004866997110388195
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.23,
"acc_stderr": 0.04229525846816503,
"acc_norm": 0.23,
"acc_norm_stderr": 0.04229525846816503
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.23703703703703705,
"acc_stderr": 0.03673731683969506,
"acc_norm": 0.23703703703703705,
"acc_norm_stderr": 0.03673731683969506
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.2894736842105263,
"acc_stderr": 0.03690677986137283,
"acc_norm": 0.2894736842105263,
"acc_norm_stderr": 0.03690677986137283
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.21,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.21,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.30566037735849055,
"acc_stderr": 0.028353298073322666,
"acc_norm": 0.30566037735849055,
"acc_norm_stderr": 0.028353298073322666
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2569444444444444,
"acc_stderr": 0.03653946969442099,
"acc_norm": 0.2569444444444444,
"acc_norm_stderr": 0.03653946969442099
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252604,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252604
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.23,
"acc_stderr": 0.04229525846816506,
"acc_norm": 0.23,
"acc_norm_stderr": 0.04229525846816506
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.27167630057803466,
"acc_stderr": 0.03391750322321659,
"acc_norm": 0.27167630057803466,
"acc_norm_stderr": 0.03391750322321659
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3431372549019608,
"acc_stderr": 0.047240073523838896,
"acc_norm": 0.3431372549019608,
"acc_norm_stderr": 0.047240073523838896
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.23,
"acc_stderr": 0.042295258468165044,
"acc_norm": 0.23,
"acc_norm_stderr": 0.042295258468165044
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.2553191489361702,
"acc_stderr": 0.028504856470514185,
"acc_norm": 0.2553191489361702,
"acc_norm_stderr": 0.028504856470514185
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2631578947368421,
"acc_stderr": 0.041424397194893624,
"acc_norm": 0.2631578947368421,
"acc_norm_stderr": 0.041424397194893624
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.2206896551724138,
"acc_stderr": 0.034559302019248124,
"acc_norm": 0.2206896551724138,
"acc_norm_stderr": 0.034559302019248124
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.25132275132275134,
"acc_stderr": 0.022340482339643895,
"acc_norm": 0.25132275132275134,
"acc_norm_stderr": 0.022340482339643895
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.2857142857142857,
"acc_stderr": 0.0404061017820884,
"acc_norm": 0.2857142857142857,
"acc_norm_stderr": 0.0404061017820884
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.3096774193548387,
"acc_stderr": 0.026302774983517418,
"acc_norm": 0.3096774193548387,
"acc_norm_stderr": 0.026302774983517418
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.3054187192118227,
"acc_stderr": 0.03240661565868408,
"acc_norm": 0.3054187192118227,
"acc_norm_stderr": 0.03240661565868408
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.17,
"acc_stderr": 0.03775251680686371,
"acc_norm": 0.17,
"acc_norm_stderr": 0.03775251680686371
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.23030303030303031,
"acc_stderr": 0.03287666758603488,
"acc_norm": 0.23030303030303031,
"acc_norm_stderr": 0.03287666758603488
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.36363636363636365,
"acc_stderr": 0.034273086529999344,
"acc_norm": 0.36363636363636365,
"acc_norm_stderr": 0.034273086529999344
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.35751295336787564,
"acc_stderr": 0.034588160421810045,
"acc_norm": 0.35751295336787564,
"acc_norm_stderr": 0.034588160421810045
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.34102564102564104,
"acc_stderr": 0.024035489676335065,
"acc_norm": 0.34102564102564104,
"acc_norm_stderr": 0.024035489676335065
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2777777777777778,
"acc_stderr": 0.02730914058823019,
"acc_norm": 0.2777777777777778,
"acc_norm_stderr": 0.02730914058823019
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.3487394957983193,
"acc_stderr": 0.03095663632856655,
"acc_norm": 0.3487394957983193,
"acc_norm_stderr": 0.03095663632856655
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2847682119205298,
"acc_stderr": 0.03684881521389023,
"acc_norm": 0.2847682119205298,
"acc_norm_stderr": 0.03684881521389023
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.3486238532110092,
"acc_stderr": 0.020431254090714328,
"acc_norm": 0.3486238532110092,
"acc_norm_stderr": 0.020431254090714328
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4722222222222222,
"acc_stderr": 0.0340470532865388,
"acc_norm": 0.4722222222222222,
"acc_norm_stderr": 0.0340470532865388
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.24019607843137256,
"acc_stderr": 0.02998373305591362,
"acc_norm": 0.24019607843137256,
"acc_norm_stderr": 0.02998373305591362
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.189873417721519,
"acc_stderr": 0.025530100460233497,
"acc_norm": 0.189873417721519,
"acc_norm_stderr": 0.025530100460233497
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.10762331838565023,
"acc_stderr": 0.020799400082880004,
"acc_norm": 0.10762331838565023,
"acc_norm_stderr": 0.020799400082880004
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.2748091603053435,
"acc_stderr": 0.039153454088478354,
"acc_norm": 0.2748091603053435,
"acc_norm_stderr": 0.039153454088478354
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.14049586776859505,
"acc_stderr": 0.0317223342600216,
"acc_norm": 0.14049586776859505,
"acc_norm_stderr": 0.0317223342600216
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.2037037037037037,
"acc_stderr": 0.038935425188248475,
"acc_norm": 0.2037037037037037,
"acc_norm_stderr": 0.038935425188248475
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.18404907975460122,
"acc_stderr": 0.03044677768797173,
"acc_norm": 0.18404907975460122,
"acc_norm_stderr": 0.03044677768797173
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.16964285714285715,
"acc_stderr": 0.0356236785009539,
"acc_norm": 0.16964285714285715,
"acc_norm_stderr": 0.0356236785009539
},
"harness|hendrycksTest-management|5": {
"acc": 0.34951456310679613,
"acc_stderr": 0.04721188506097173,
"acc_norm": 0.34951456310679613,
"acc_norm_stderr": 0.04721188506097173
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.19230769230769232,
"acc_stderr": 0.025819233256483706,
"acc_norm": 0.19230769230769232,
"acc_norm_stderr": 0.025819233256483706
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.18,
"acc_stderr": 0.03861229196653694,
"acc_norm": 0.18,
"acc_norm_stderr": 0.03861229196653694
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.24648786717752236,
"acc_stderr": 0.015411308769686934,
"acc_norm": 0.24648786717752236,
"acc_norm_stderr": 0.015411308769686934
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.2514450867052023,
"acc_stderr": 0.02335736578587403,
"acc_norm": 0.2514450867052023,
"acc_norm_stderr": 0.02335736578587403
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2424581005586592,
"acc_stderr": 0.014333522059217889,
"acc_norm": 0.2424581005586592,
"acc_norm_stderr": 0.014333522059217889
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.28431372549019607,
"acc_stderr": 0.025829163272757485,
"acc_norm": 0.28431372549019607,
"acc_norm_stderr": 0.025829163272757485
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.24758842443729903,
"acc_stderr": 0.024513879973621967,
"acc_norm": 0.24758842443729903,
"acc_norm_stderr": 0.024513879973621967
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.22839506172839505,
"acc_stderr": 0.023358211840626267,
"acc_norm": 0.22839506172839505,
"acc_norm_stderr": 0.023358211840626267
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.2553191489361702,
"acc_stderr": 0.02601199293090201,
"acc_norm": 0.2553191489361702,
"acc_norm_stderr": 0.02601199293090201
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.2561929595827901,
"acc_stderr": 0.011149173153110582,
"acc_norm": 0.2561929595827901,
"acc_norm_stderr": 0.011149173153110582
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.4522058823529412,
"acc_stderr": 0.030233758551596452,
"acc_norm": 0.4522058823529412,
"acc_norm_stderr": 0.030233758551596452
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.21895424836601307,
"acc_stderr": 0.016729937565537544,
"acc_norm": 0.21895424836601307,
"acc_norm_stderr": 0.016729937565537544
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.21818181818181817,
"acc_stderr": 0.03955932861795833,
"acc_norm": 0.21818181818181817,
"acc_norm_stderr": 0.03955932861795833
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.39183673469387753,
"acc_stderr": 0.031251275910891656,
"acc_norm": 0.39183673469387753,
"acc_norm_stderr": 0.031251275910891656
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.24875621890547264,
"acc_stderr": 0.030567675938916707,
"acc_norm": 0.24875621890547264,
"acc_norm_stderr": 0.030567675938916707
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.26,
"acc_stderr": 0.04408440022768078,
"acc_norm": 0.26,
"acc_norm_stderr": 0.04408440022768078
},
"harness|hendrycksTest-virology|5": {
"acc": 0.23493975903614459,
"acc_stderr": 0.03300533186128922,
"acc_norm": 0.23493975903614459,
"acc_norm_stderr": 0.03300533186128922
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.27485380116959063,
"acc_stderr": 0.03424042924691584,
"acc_norm": 0.27485380116959063,
"acc_norm_stderr": 0.03424042924691584
},
"harness|truthfulqa:mc|0": {
"mc1": 0.25703794369645044,
"mc1_stderr": 0.01529807750948508,
"mc2": 0.4222152001545833,
"mc2_stderr": 0.014647547913363862
},
"harness|winogrande|5": {
"acc": 0.5406471981057617,
"acc_stderr": 0.014005973823825138
},
"harness|gsm8k|5": {
"acc": 0.0075815011372251705,
"acc_stderr": 0.0023892815120772136
}
}
```
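As a rough illustration of how the per-task numbers above roll up into the "all" block (a sketch only, not the leaderboard's actual aggregation code), averaging a metric over the tasks that report it looks like this:

```python
# Illustrative only: a tiny excerpt of the per-task results shown above.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc_norm": 0.23},
    "harness|hendrycksTest-anatomy|5": {"acc_norm": 0.23703703703703705},
    "harness|hendrycksTest-astronomy|5": {"acc_norm": 0.2894736842105263},
}

def mean_metric(results, metric):
    """Average `metric` over every task that reports it."""
    values = [task[metric] for task in results.values() if metric in task]
    return sum(values) / len(values)

print(f"acc_norm over {len(results)} tasks: {mean_metric(results, 'acc_norm'):.4f}")
```

Applied to the full results dictionary, a helper like this should land close to the headline averages, though the leaderboard's exact task selection for each aggregate may differ.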
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
liuyanchen1015/MULTI_VALUE_rte_his_he | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: test
num_bytes: 179677
num_examples: 403
- name: train
num_bytes: 133331
num_examples: 301
download_size: 213915
dataset_size: 313008
---
# Dataset Card for "MULTI_VALUE_rte_his_he"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ajibawa-2023/Education-Young-Children | ---
license: apache-2.0
language:
- en
tags:
- Education
- Young Children
- Children
- Knowledge
---
Details coming soon!! |
Nexdata/Indonesian_Conversational_Speech_Data_by_Mobile_Phone | ---
language:
- id
task_categories:
- conversational
- automatic-speech-recognition
---
# Dataset Card for Nexdata/Indonesian_Conversational_Speech_Data_by_Mobile_Phone
## Description
This 300-hour Indonesian conversational speech dataset, collected by mobile phone, involved about 300 native speakers with a balanced gender ratio. Speakers chose a few familiar topics from a given list and held conversations to ensure the dialogues' fluency and naturalness. The recording devices are various mobile phones. The audio format is 16kHz, 16bit, uncompressed WAV, and all the speech data was recorded in quiet indoor environments. All the speech audio was manually transcribed with text content, the start and end time of each effective sentence, and speaker identification.
For more details, please refer to the link: https://www.nexdata.ai/datasets/1239?source=Huggingface
# Specifications
## Format
16kHz 16bit, uncompressed wav, mono channel;
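When working with the audio, the stated format can be sanity-checked with Python's standard `wave` module. This is a generic sketch (the file path is a placeholder, not part of the dataset):

```python
import wave

def check_format(path):
    """Return True if a WAV file matches the stated spec: 16 kHz, 16-bit, mono."""
    with wave.open(path, "rb") as wav:
        return (
            wav.getframerate() == 16000   # 16 kHz sample rate
            and wav.getsampwidth() == 2   # 16-bit = 2 bytes per sample
            and wav.getnchannels() == 1   # mono channel
        )
```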
## Environment
quiet indoor environment, without echo;
## Recording content
dozens of topics are specified, and the speakers make dialogue under those topics while the recording is performed;
## Demographics
About 300 speakers in total
## Annotation
Annotation of transcription text, speaker identification, and gender
## Device
Android mobile phone, iPhone;
## Application scenarios
speech recognition; voiceprint recognition;
## Accuracy rate
the word accuracy rate is not less than 98%
# Licensing Information
Commercial License |
oscar-corpus/oscar-2301-hpc | ---
license: cc0-1.0
size_categories:
- n>1T
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- fill-mask
- text-generation
task_ids:
- language-modeling
paperswithcode_id: oscar
extra_gated_prompt: "By filling the form below, you understand that only the metadata and the annotations of OSCAR 23.01 have a cc0-1.0 license, and that the rest of the content is crawled data derived from the November/December 2022 snapshot of Common Crawl, for which the authors of OSCAR **do not** hold any copyright whatsoever."
extra_gated_fields:
Name: text
Email: text
Affiliation: text
Country: text
Usecase: text
I have explicitly check with my jurisdiction and I confirm that downloading OSCAR 2301 is legal in the country/region where I am located right now, and for the use case that I have described above: checkbox
---
# Dataset Card for "oscar-2301-hpc"
## IMPORTANT NOTE: This dataset is intended to be downloaded as a snapshot and used directly on an HPC where keeping the number of inodes low is important. The files here are specifically designed to be very large; if you're looking for more manageable file sizes please use the standard distribution of OSCAR instead: https://huggingface.co/datasets/oscar-corpus/OSCAR-2301
## IMPORTANT NOTE: THIS DATASET CARD IS STILL BEING WRITTEN, PLEASE BE PATIENT WHILE WE COMPLETE ALL THE INFORMATION ABOUT THE CORPUS
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://oscar-project.org](https://oscar-project.org)
- **Repository:** [https://github.com/oscar-project](https://github.com/oscar-project)
- **Papers:** [Towards a Cleaner Document-Oriented Multilingual Crawled Corpus](https://aclanthology.org/2022.lrec-1.463/), [Perplexed by Quality: A Perplexity-based Method for Adult and Harmful Content Detection in Multilingual Heterogeneous Web Data](https://arxiv.org/abs/2212.10440)
- **Point of Contact:** [Contact](https://oscar-project.org/#contact)
### Dataset Summary
The OSCAR project (**O**pen **S**uper-large **C**rawled **A**ggregated co**R**pus) is an Open Source project aiming to provide web-based multilingual resources and datasets for Machine Learning (ML) and Artificial Intelligence (AI) applications. The project focuses specifically on providing large quantities of unannotated raw data that is commonly used in the pre-training of large deep learning models. The OSCAR project has developed [high-performance data pipelines](https://github.com/oscar-corpus/ungoliant) specifically conceived to classify and filter large amounts of [web data](https://commoncrawl.org/). The project has also paid special attention to improving the data quality of web-based corpora as well as providing data for low-resource languages, so that these new ML/AI technologies are accessible to as many communities as possible.
OSCAR 23.01 is the January 2023 version of the OSCAR Corpus based on the [November/December 2022 dump of Common Crawl](https://commoncrawl.org/2022/12/nov-dec-2022-crawl-archive-now-available/). While being quite similar to OSCAR 22.01, it contains several new features, including [KenLM](https://kheafield.com/code/kenlm/)-based adult content detection, precomputed [Locality-Sensitive Hashes](https://fr.wikipedia.org/wiki/Locality_sensitive_hashing) for near deduplication, and [blocklist](https://dsi.ut-capitole.fr/blacklists/index_en.php)-based categories. OSCAR 23.01 has also moved from gzip to [Zstandard compression](https://facebook.github.io/zstd/). You might already have `zstd` installed on your system, but if not, please check the [Zstandard website](https://facebook.github.io/zstd/) for installation instructions.
### Supported Tasks and Leaderboards
OSCAR is mainly intended to pretrain language models and word representations.
### Languages
All the data is distributed by language, both the original and the deduplicated versions of the data are available. 151 different languages are available. The table in subsection [Data Splits Sample Size](#data-splits-sample-size) provides the language code for each subcorpus as well as the number of words (space separated tokens), lines and sizes for both the original and the deduplicated versions of OSCAR.
### Issues
OSCAR 23.01 may have quality issues in small subcorpora, as has been the case before.
Note that since language identification is performed on whole documents, lines in other languages are expected within a given language subcorpus.
As an example, it is known and expected that the German subcorpus contains documents holding lines identified as Swiss German / Alemannic.
**If you encounter something that is unexpected, please file an issue here: https://github.com/oscar-corpus/corpus/issues.**
|Language code|Language|Issues|
|-------------|--------|------|
| | | |
## Dataset Structure
We show detailed information for all the configurations of the dataset.
### Data Instances
TODO
### Layout
```js
{
"content":"English sentence\nphrase en français\n????????????", // (1)
"warc_headers":{ // (2)
"warc-identified-content-language":"fra,eng",
"warc-target-uri":"https://fr.wikipedia.org/wiki/...",
"warc-record-id":"<urn:uuid:29eaa920-d299-4b1d-b687-c72bd8d68116>",
"warc-type":"conversion",
"content-length":"35298", // (3)
"warc-refers-to":"<urn:uuid:39e42055-0d94-4e45-9c6c-9e7056635d64>",
"warc-block-digest":"sha1:WFH2A5WHCS2H365GIAFYQPI7UOAMFGHB", // (3)
"warc-date":"2022-11-26T09:45:47Z",
"content-type":"text/plain"
},
"metadata":{
"identification":{ // (4)
"label":"fr",
"prob":0.8938327
},
"harmful_pp":4063.1814, // (5)
"tlsh":"tlsh:T125315FF2B6088901EEA097015DB39B4600B...", // (6)
"quality_warnings":[ // (7)
"short_sentences",
"header",
"footer"
],
"categories":[ // (8)
"examen_pix",
"liste_bu"
],
"sentence_identifications":[ // (9)
{
"label":"fr",
"prob":0.99837273
},
{
"label":"en",
"prob":0.9992377
},
null
]
}
}
```
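Given this layout, a downstream cleaning pass might filter on the `metadata` fields. The sketch below drops documents that carry quality warnings or whose `harmful_pp` falls below a threshold; both the threshold value and the reading that lower `harmful_pp` means text closer to the adult-content KenLM model are illustrative assumptions here, not guarantees from this card:

```python
def keep_document(doc, min_harmful_pp=5000.0):
    """Decide whether to keep one parsed OSCAR record (a dict shaped like the
    layout above). Threshold and field semantics are illustrative assumptions."""
    meta = doc["metadata"]
    if meta.get("quality_warnings"):      # e.g. "short_sentences", "header"
        return False
    harmful_pp = meta.get("harmful_pp")
    if harmful_pp is not None and harmful_pp < min_harmful_pp:
        return False                      # assumed: low perplexity = likely harmful
    return True
```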
### Data Splits
<details>
<summary>Click to expand the number of samples per configuration</summary>
</details>
## Table
| | Code | Language | # docs | # words | Content Length : |
|----:|:-------|:-------------------------|:--------------|:----------------|:-----------------|
| 0 | af | Afrikaans | 23,994 | 6,217,024 | 37.2 MB |
| 1 | sq | Albanian | 1,342,790 | 462,694,599 | 3.2 GB |
| 2 | am | Amharic | 119,434 | 40,262,809 | 512.9 MB |
| 3 | ar | Arabic | 25,012,116 | 10,081,452,882 | 110.7 GB |
| 4 | an | Aragonese | 34 | 264 | 11.0 kB |
| 5 | hy | Armenian | 1,056,974 | 336,045,041 | 4.9 GB |
| 6 | as | Assamese | 89,542 | 24,395,215 | 412.1 MB |
| 7 | ast | Asturian | 440 | 10,917 | 74.1 kB |
| 8 | av | Avaric | 44 | 1,073 | 18.6 kB |
| 9 | az | Azerbaijani | 1,159,994 | 316,850,330 | 3.0 GB |
| 10 | bn | Bangla | 3,474,086 | 1,092,983,765 | 19.1 GB |
| 11 | ba | Bashkir | 128,248 | 26,036,637 | 363.7 MB |
| 12 | eu | Basque | 678,474 | 136,672,615 | 1.2 GB |
| 13 | be | Belarusian | 445,612 | 164,729,607 | 2.3 GB |
| 14 | bh | Bihari languages | 48 | 507 | 6.8 kB |
| 15 | bpy | Bishnupriya | 2,346 | 346,947 | 5.4 MB |
| 16 | bs | Bosnian | 20 | 395 | 3.0 kB |
| 17 | br | Breton | 36,338 | 4,759,407 | 31.4 MB |
| 18 | bg | Bulgarian | 8,933,998 | 3,635,273,738 | 44.1 GB |
| 19 | my | Burmese | 430,276 | 82,433,836 | 3.0 GB |
| 20 | ca | Catalan | 6,953,898 | 2,240,460,836 | 15.3 GB |
| 21 | ceb | Cebuano | 16,174 | 6,263,404 | 41.1 MB |
| 22 | ckb | Central Kurdish | 182,508 | 61,334,746 | 772.9 MB |
| 23 | ce | Chechen | 11,686 | 1,051,752 | 13.9 MB |
| 24 | zh | Chinese | 138,478,270 | 44,378,380,161 | 1.4 TB |
| 25 | cv | Chuvash | 16,652 | 3,039,925 | 42.3 MB |
| 26 | kw | Cornish | 8 | 80 | 432 Bytes |
| 27 | hr | Croatian | 31,808 | 3,542,961 | 26.5 MB |
| 28 | cs | Czech | 34,859,632 | 9,717,378,559 | 77.0 GB |
| 29 | da | Danish | 7,214,338 | 2,217,634,340 | 14.8 GB |
| 30 | dv | Divehi | 77,060 | 10,655,359 | 200.1 MB |
| 31 | nl | Dutch | 72,552,688 | 19,564,553,306 | 135.0 GB |
| 32 | mhr | Eastern Mari | 9,502 | 1,615,215 | 22.9 MB |
| 33 | arz | Egyptian Arabic | 3,958 | 385,511 | 3.7 MB |
| 34 | en | English | 1,235,510,986 | 523,869,288,690 | 3.4 TB |
| 35 | eo | Esperanto | 226,924 | 67,774,923 | 474.8 MB |
| 36 | et | Estonian | 3,601,904 | 938,296,892 | 8.0 GB |
| 37 | tl | Filipino | 250,558 | 110,560,444 | 719.2 MB |
| 38 | fi | Finnish | 14,471,710 | 4,198,143,883 | 41.1 GB |
| 39 | fr | French | 158,334,998 | 62,127,088,294 | 430.5 GB |
| 40 | gl | Galician | 248,762 | 38,345,625 | 255.7 MB |
| 41 | ka | Georgian | 1,343,036 | 373,935,158 | 8.4 GB |
| 42 | de | German | 206,598,430 | 73,848,586,648 | 594.7 GB |
| 43 | gom | Goan Konkani | 398 | 121,035 | 2.3 MB |
| 44 | el | Greek | 20,282,864 | 7,691,622,692 | 95.7 GB |
| 45 | gn | Guarani | 14 | 260 | 2.2 kB |
| 46 | gu | Gujarati | 425,552 | 417,001,705 | 5.6 GB |
| 47 | ht | Haitian Creole | 2 | 20,671 | 93.1 kB |
| 48 | he | Hebrew | 3,997,888 | 1,697,158,891 | 18.0 GB |
| 49 | hi | Hindi | 5,514,454 | 2,475,605,444 | 32.6 GB |
| 50 | hu | Hungarian | 21,349,372 | 16,013,364,289 | 150.1 GB |
| 51 | is | Icelandic | 1,210,232 | 294,471,539 | 2.2 GB |
| 52 | io | Ido | 224 | 2,598 | 16.1 kB |
| 53 | ilo | Iloko | 144 | 4,411 | 28.0 kB |
| 54 | id | Indonesian | 7,109,778 | 3,228,020,221 | 23.4 GB |
| 55 | ia | Interlingua | 34 | 9,384 | 33.5 kB |
| 56 | ie | Interlingue | 2 | 0 | 881 Bytes |
| 57 | ga | Irish | 29,894 | 9,054,923 | 63.2 MB |
| 58 | it | Italian | 89,021,606 | 36,327,274,203 | 259.4 GB |
| 59 | ja | Japanese | 94,236,404 | 4,401,059,165 | 181.2 GB |
| 60 | jv | Javanese | 172 | 3,286 | 25.7 kB |
| 61 | xal | Kalmyk | 2 | 27 | 315 Bytes |
| 62 | kn | Kannada | 448,500 | 124,924,350 | 2.6 GB |
| 63 | krc | Karachay-Balkar | 496 | 8,385 | 122.4 kB |
| 64 | kk | Kazakh | 677,622 | 214,679,857 | 3.3 GB |
| 65 | km | Khmer | 450,660 | 59,880,231 | 3.2 GB |
| 66 | kv | Komi | 460 | 5,909 | 70.3 kB |
| 67 | ko | Korean | 15,147,698 | 3,435,866,935 | 38.1 GB |
| 68 | ku | Kurdish | 80,338 | 25,921,607 | 174.1 MB |
| 69 | ky | Kyrgyz | 144,288 | 32,062,783 | 489.3 MB |
| 70 | lo | Lao | 118,374 | 10,659,203 | 472.1 MB |
| 71 | la | Latin | 14,384 | 307,865 | 2.0 MB |
| 72 | lv | Latvian | 2,435,882 | 845,459,899 | 7.4 GB |
| 73 | lez | Lezghian | 676 | 60,634 | 856.6 kB |
| 74 | li | Limburgish | 6 | 169 | 1.4 kB |
| 75 | lt | Lithuanian | 5,182,028 | 1,674,362,574 | 14.5 GB |
| 76 | jbo | Lojban | 572 | 312,315 | 1.5 MB |
| 77 | lmo | Lombard | 112 | 3,269 | 21.0 kB |
| 78 | nds | Low German | 5,248 | 1,612,175 | 10.7 MB |
| 79 | dsb | Lower Sorbian | 8 | 84 | 664 Bytes |
| 80 | lb | Luxembourgish | 18,090 | 2,514,838 | 18.4 MB |
| 81 | mk | Macedonian | 1,063,298 | 389,344,425 | 4.7 GB |
| 82 | mai | Maithili | 46 | 467 | 6.8 kB |
| 83 | mg | Malagasy | 10,830 | 1,416,430 | 11.2 MB |
| 84 | ms | Malay | 11,500 | 238,477 | 2.6 MB |
| 85 | ml | Malayalam | 800,936 | 236,597,838 | 5.8 GB |
| 86 | mt | Maltese | 5,180 | 149,886 | 1.3 MB |
| 87 | mr | Marathi | 729,578 | 252,706,331 | 4.5 GB |
| 88 | mzn | Mazanderani | 384 | 16,115 | 169.2 kB |
| 89 | min | Minangkabau | 2,436 | 305,589 | 3.8 MB |
| 90 | xmf | Mingrelian | 7,318 | 283,316 | 6.1 MB |
| 91 | mwl | Mirandese | 4 | 54 | 423 Bytes |
| 92 | mn | Mongolian | 1,061,710 | 454,350,415 | 5.8 GB |
| 93 | multi | **Multilingual** | 2,948,202 | 1,251,676,406 | 11.9 GB |
| 94 | nah | Nahuatl languages | 38 | 279 | 2.4 kB |
| 95 | ne | Nepali | 1,152,156 | 278,901,036 | 4.9 GB |
| 96 | new | Newari | 1,996 | 229,703 | 4.0 MB |
| 97 | no | Norwegian | 2,797,378 | 373,160,033 | 2.6 GB |
| 98 | nn | Norwegian Nynorsk | 19,470 | 575,518 | 3.7 MB |
| 99 | oc | Occitan | 920 | 34,701 | 405.0 kB |
| 100 | or | Odia | 158,426 | 31,963,340 | 543.1 MB |
| 101 | os | Ossetic | 8,628 | 3,935,964 | 50.7 MB |
| 102 | ps | Pashto | 87,408 | 30,196,179 | 261.6 MB |
| 103 | fa | Persian | 23,813,882 | 9,609,206,698 | 93.2 GB |
| 104 | pms | Piedmontese | 2,524 | 510,087 | 3.1 MB |
| 105 | pl | Polish | 57,184,826 | 18,073,705,588 | 147.1 GB |
| 106 | pt | Portuguese | 36,062,800 | 15,172,557,311 | 105.0 GB |
| 107 | pa | Punjabi | 222,058 | 104,235,418 | 1.4 GB |
| 108 | qu | Quechua | 2 | 13 | 143 Bytes |
| 109 | ro | Romanian | 11,985,668 | 6,302,600,833 | 45.6 GB |
| 110 | bxr | Russia Buriat | 72 | 698 | 8.2 kB |
| 111 | ru | Russian | 194,143,422 | 78,032,029,344 | 1.1 TB |
| 112 | sah | Sakha | 17,566 | 4,288,051 | 68.8 MB |
| 113 | sa | Sanskrit | 16,802 | 2,479,345 | 56.3 MB |
| 114 | gd | Scottish Gaelic | 776 | 18,458 | 146.1 kB |
| 115 | sr | Serbian | 1,677,896 | 632,781,822 | 7.7 GB |
| 116 | sh | Serbian (Latin) | 3,214 | 166,517 | 816.4 kB |
| 117 | sd | Sindhi | 48,566 | 14,667,207 | 131.6 MB |
| 118 | si | Sinhala | 301,066 | 172,755,385 | 2.6 GB |
| 119 | sk | Slovak | 8,931,784 | 2,704,716,280 | 21.5 GB |
| 120 | sl | Slovenian | 1,112,560 | 192,816,743 | 1.4 GB |
| 121 | so | Somali | 6 | 51 | 503 Bytes |
| 122 | azb | South Azerbaijani | 26,364 | 2,029,729 | 28.4 MB |
| 123 | es | Spanish | 153,574,556 | 63,388,237,965 | 429.9 GB |
| 124 | su | Sundanese | 18 | 258 | 2.0 kB |
| 125 | sw | Swahili | 1,664 | 164,459 | 1.0 MB |
| 126 | sv | Swedish | 21,891,348 | 6,993,719,601 | 50.0 GB |
| 127 | gsw | Swiss German | 342 | 34,328 | 232.7 kB |
| 128 | tg | Tajik | 144,932 | 76,987,285 | 1.0 GB |
| 129 | ta | Tamil | 1,638,238 | 738,824,392 | 15.8 GB |
| 130 | tt | Tatar | 262,654 | 59,253,765 | 833.8 MB |
| 131 | te | Telugu | 644,712 | 201,575,815 | 3.9 GB |
| 132 | th | Thai | 14,845,900 | 2,224,483,018 | 92.0 GB |
| 133 | bo | Tibetan | 62,352 | 6,062,558 | 531.6 MB |
| 134 | tr | Turkish | 26,654,330 | 8,290,890,087 | 73.7 GB |
| 135 | tk | Turkmen | 4,576 | 325,786 | 3.3 MB |
| 136 | uk | Ukrainian | 10,059,992 | 3,183,842,018 | 44.7 GB |
| 137 | x-eml | Emiliano-Romagnol | 4 | 329 | 1.8 kB |
| 138 | hsb | Upper Sorbian | 402 | 15,827 | 123.2 kB |
| 139 | ur | Urdu | 887,004 | 434,023,273 | 3.8 GB |
| 140 | ug | Uyghur | 51,304 | 14,659,554 | 219.8 MB |
| 141 | uz | Uzbek | 15,806 | 1,665,960 | 15.3 MB |
| 142 | vi | Vietnamese | 33,933,994 | 22,424,984,210 | 140.8 GB |
| 143 | vo | Volapük | 896 | 49,968 | 371.9 kB |
| 144 | wa | Walloon | 390 | 6,347 | 34.3 kB |
| 145 | war | Waray | 1,494 | 19,665 | 126.8 kB |
| 146 | cy | Welsh | 151,512 | 52,250,043 | 333.0 MB |
| 147 | fy | Western Frisian | 45,458 | 9,885,788 | 70.4 MB |
| 148 | mrj | Western Mari | 496 | 60,180 | 765.8 kB |
| 149 | pnb | Western Panjabi | 12,904 | 11,844,695 | 105.8 MB |
| 150 | wuu | Wu Chinese | 136 | 1,199 | 26.8 kB |
| 151 | yi | Yiddish | 47,438 | 14,287,370 | 171.7 MB |
| 152 | yo | Yoruba | 128 | 2,396 | 16.6 kB |
## Dataset Creation
### Curation Rationale
OSCAR was constructed using [`Ungoliant`](https://github.com/oscar-corpus/ungoliant), a new pipeline derived from [goclassy](https://github.com/oscar-corpus/goclassy), itself being derived from [fastText's one](https://github.com/facebookresearch/fastText).
The pipeline works on documents rather than lines.
`Ungoliant` is implemented in the [Rust programming language](https://rust-lang.org), and uses [rayon](https://github.com/rayon-rs/rayon) as its data parallelism strategy.
Threading is done at shard, record and sentence level, making the whole generation process much more efficient.
Filtering will be explained in a future blog post on our [website](https://oscar-corpus.com).
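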
### Source Data
#### Initial Data Collection and Normalization
[Common Crawl](https://commoncrawl.org/) is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers have always respected [nofollow](http://microformats.org/wiki/rel-nofollow) and [robots.txt](https://www.robotstxt.org/) policies.
Each monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.
To construct OSCAR, the WET files of Common Crawl were used. These contain the extracted plain texts from the websites, mostly converted to UTF-8, as well as headers containing the metadata of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of OSCAR 22.01, the **November/December 2021** snapshot was used. It is composed of 64,000 compressed text files containing documents and their headers.
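A WET record is a plain-text header block followed by the extracted body. The following sketch parses a single illustrative record; the sample content is invented for demonstration, and real WET files are gzip-compressed and more robustly handled with a dedicated WARC library such as `warcio`:

```python
# Sketch: splitting one WET-style record into its headers and body.
# The sample record below is illustrative, not taken from Common Crawl.

SAMPLE_RECORD = """\
WARC/1.0
WARC-Type: conversion
WARC-Target-URI: https://example.com/
WARC-Date: 2021-11-26T09:45:47Z
Content-Type: text/plain

Bonjour le monde.
"""

def parse_wet_record(record):
    """Split a WET record into (headers dict, plain-text body)."""
    header_part, _, body = record.partition("\n\n")
    headers = {}
    for line in header_part.splitlines()[1:]:  # skip the "WARC/1.0" version line
        key, _, value = line.partition(": ")
        headers[key] = value
    return headers, body

headers, body = parse_wet_record(SAMPLE_RECORD)
print(headers["WARC-Type"])   # conversion
print(body.strip())           # Bonjour le monde.
```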
#### Who are the source language producers?
The data comes from multiple web pages in a large variety of languages.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
Being constructed from Common Crawl, personal and sensitive information might be present. This **must** be considered before training deep learning models with OSCAR, especially in the case of text-generation models.
## Considerations for Using the Data
### Social Impact of Dataset
OSCAR is intended to bring more data to a wide variety of languages; the aim of the corpus is to make large amounts of data available to lower-resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.
### Discussion of Biases
OSCAR is not properly filtered yet, and this can be reflected in the models trained with it. Care is advised, especially concerning biases of the resulting models.
### Other Known Limitations
The [fastText linear classifier](https://fasttext.cc) is limited both in performance and in the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, especially for the lowest-resource languages. Some audits have already been done by [third parties](https://arxiv.org/abs/2010.14571).
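Because per-sentence labels can disagree with the document label (see the `sentence_identifications` field in the metadata example), downstream users sometimes re-derive a document language themselves. A minimal sketch of one possible aggregation; this is an illustration, not Ungoliant's exact algorithm:

```python
# Sketch: aggregating per-sentence language predictions into one
# document label. The majority-by-probability-mass rule here is an
# illustrative assumption, not the pipeline's actual method.
from collections import defaultdict

def document_language(sentence_ids):
    """Return (label, share of total probability mass) or None."""
    mass = defaultdict(float)
    total = 0.0
    for ident in sentence_ids:
        if ident is None:  # sentences the classifier could not identify
            continue
        mass[ident["label"]] += ident["prob"]
        total += ident["prob"]
    if not mass:
        return None
    label = max(mass, key=mass.get)
    return label, mass[label] / total

sents = [
    {"label": "fr", "prob": 0.99837273},
    {"label": "en", "prob": 0.9992377},
    None,
    {"label": "fr", "prob": 0.95},
]
print(document_language(sents))  # the document is labelled 'fr'
```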
## Additional Information
### Dataset Curators
This release of OSCAR was made possible by [Julien Abadji](https://ujj.space), [Pedro Ortiz Suarez](https://portizs.eu/), [Rua Ismail](https://oscar-project.org/authors/rua/), [Sotaro Takeshita](https://sotaro.io/about), [Sebastian Nagel](https://www.polver.uni-konstanz.de/cnc/people/nagel/) and [Benoit Sagot](http://pauillac.inria.fr/~sagot/).
### Licensing Information
These data are released under the following licensing scheme:
We do not own any of the text from which these data have been extracted.
We license the actual packaging, the metadata and the annotations of these data under the Creative Commons CC0 license ("no rights reserved") http://creativecommons.org/publicdomain/zero/1.0/
To the extent possible under law, the OSCAR project, Inria, the University of Mannheim and DFKI GmbH have waived all copyright and related or neighboring rights to OSCAR
This work is published from: France and Germany.
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
We will comply with legitimate requests by removing the affected sources from the next release of the corpus.
### Citation Information
```
@ARTICLE{2022arXiv221210440J,
author = {{Jansen}, Tim and {Tong}, Yangling and {Zevallos}, Victoria and {Ortiz Suarez}, Pedro},
title = "{Perplexed by Quality: A Perplexity-based Method for Adult and Harmful Content Detection in Multilingual Heterogeneous Web Data}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language},
year = 2022,
month = dec,
eid = {arXiv:2212.10440},
pages = {arXiv:2212.10440},
doi = {10.48550/arXiv.2212.10440},
archivePrefix = {arXiv},
eprint = {2212.10440},
primaryClass = {cs.CL},
adsurl = {https://ui.adsabs.harvard.edu/abs/2022arXiv221210440J},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
@inproceedings{abadji-etal-2022-towards,
title = "Towards a Cleaner Document-Oriented Multilingual Crawled Corpus",
author = "Abadji, Julien and
Ortiz Suarez, Pedro and
Romary, Laurent and
Sagot, Beno{\^\i}t",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.463",
pages = "4344--4355",
abstract = "The need for large corpora raw corpora has dramatically increased in recent years with the introduction of transfer learning and semi-supervised learning methods to Natural Language Processing. And while there have been some recent attempts to manually curate the amount of data necessary to train large language models, the main way to obtain this data is still through automatic web crawling. In this paper we take the existing multilingual web corpus OSCAR and its pipeline Ungoliant that extracts and classifies data from Common Crawl at the line level, and propose a set of improvements and automatic annotations in order to produce a new document-oriented version of OSCAR that could prove more suitable to pre-train large generative language models as well as hopefully other applications in Natural Language Processing and Digital Humanities.",
}
@inproceedings{AbadjiOrtizSuarezRomaryetal.2021,
author = {Julien Abadji and Pedro Javier Ortiz Su{\'a}rez and Laurent Romary and Beno{\^i}t Sagot},
title = {Ungoliant: An optimized pipeline for the generation of a very large-scale multilingual web corpus},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-9) 2021. Limerick, 12 July 2021 (Online-Event)},
editor = {Harald L{\"u}ngen and Marc Kupietz and Piotr Bański and Adrien Barbaresi and Simon Clematide and Ines Pisetta},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-10468},
url = {https://nbn-resolving.org/urn:nbn:de:bsz:mh39-104688},
pages = {1 -- 9},
year = {2021},
abstract = {Since the introduction of large language models in Natural Language Processing, large raw corpora have played a crucial role in Computational Linguistics. However, most of these large raw corpora are either available only for English or not available to the general public due to copyright issues. Nevertheless, there are some examples of freely available multilingual corpora for training Deep Learning NLP models, such as the OSCAR and Paracrawl corpora. However, they have quality issues, especially for low-resource languages. Moreover, recreating or updating these corpora is very complex. In this work, we try to reproduce and improve the goclassy pipeline used to create the OSCAR corpus. We propose a new pipeline that is faster, modular, parameterizable, and well documented. We use it to create a corpus similar to OSCAR but larger and based on recent data. Also, unlike OSCAR, the metadata information is at the document level. We release our pipeline under an open source license and publish the corpus under a research-only license.},
language = {en}
}
@article{kreutzer-etal-2022-quality,
title = "Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets",
author = {Kreutzer, Julia and
Caswell, Isaac and
Wang, Lisa and
Wahab, Ahsan and
van Esch, Daan and
Ulzii-Orshikh, Nasanbayar and
Tapo, Allahsera and
Subramani, Nishant and
Sokolov, Artem and
Sikasote, Claytone and
Setyawan, Monang and
Sarin, Supheakmungkol and
Samb, Sokhar and
Sagot, Beno{\^\i}t and
Rivera, Clara and
Rios, Annette and
Papadimitriou, Isabel and
Osei, Salomey and
Suarez, Pedro Ortiz and
Orife, Iroro and
Ogueji, Kelechi and
Rubungo, Andre Niyongabo and
Nguyen, Toan Q. and
M{\"u}ller, Mathias and
M{\"u}ller, Andr{\'e} and
Muhammad, Shamsuddeen Hassan and
Muhammad, Nanda and
Mnyakeni, Ayanda and
Mirzakhalov, Jamshidbek and
Matangira, Tapiwanashe and
Leong, Colin and
Lawson, Nze and
Kudugunta, Sneha and
Jernite, Yacine and
Jenny, Mathias and
Firat, Orhan and
Dossou, Bonaventure F. P. and
Dlamini, Sakhile and
de Silva, Nisansa and
{\c{C}}abuk Ball{\i}, Sakine and
Biderman, Stella and
Battisti, Alessia and
Baruwa, Ahmed and
Bapna, Ankur and
Baljekar, Pallavi and
Azime, Israel Abebe and
Awokoya, Ayodele and
Ataman, Duygu and
Ahia, Orevaoghene and
Ahia, Oghenefego and
Agrawal, Sweta and
Adeyemi, Mofetoluwa},
journal = "Transactions of the Association for Computational Linguistics",
volume = "10",
year = "2022",
address = "Cambridge, MA",
publisher = "MIT Press",
url = "https://aclanthology.org/2022.tacl-1.4",
doi = "10.1162/tacl_a_00447",
pages = "50--72",
abstract = "With the success of large-scale pre-training and multilingual modeling in Natural Language Processing (NLP), recent years have seen a proliferation of large, Web-mined text datasets covering hundreds of languages. We manually audit the quality of 205 language-specific corpora released with five major public datasets (CCAligned, ParaCrawl, WikiMatrix, OSCAR, mC4). Lower-resource corpora have systematic issues: At least 15 corpora have no usable text, and a significant fraction contains less than 50{\%} sentences of acceptable quality. In addition, many are mislabeled or use nonstandard/ambiguous language codes. We demonstrate that these issues are easy to detect even for non-proficient speakers, and supplement the human audit with automatic analyses. Finally, we recommend techniques to evaluate and improve multilingual corpora and discuss potential risks that come with low-quality data releases.",
}
@inproceedings{ortiz-suarez-etal-2020-monolingual,
title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
    author = "Ortiz Su{\'a}rez, Pedro Javier  and
Romary, Laurent and
Sagot, Benoit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.156",
pages = "1703--1714",
abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
}
@inproceedings{OrtizSuarezSagotRomary2019,
  author    = {Pedro Javier {Ortiz Su{\'a}rez} and Benoit Sagot and Laurent Romary},
title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
  editor    = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\"u}ngen and Caroline Iliadi},
  publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-9021},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
pages = {9 -- 16},
year = {2019},
abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
language = {en}
}
```
|
IsraelJordan1/ai-tube-flags | ---
license: cc-by-nc-4.0
pretty_name: FlagsWorld
---
## Description
Footage of all flags of the world!
## Model
SVD
## Tags
- Flags
## Voice
Julian
## Music
balearic deep house music
## Prompt
A video channel about flags
|
joey234/mmlu-international_law-dev | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: negate_openai_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: dev
num_bytes: 3638
num_examples: 5
download_size: 7457
dataset_size: 3638
---
# Dataset Card for "mmlu-international_law-dev"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
NotSharpe/MidwxstRVC | ---
license: openrail
---
|
argilla/distilabel-intel-orca-kto | ---
language:
- en
license: apache-2.0
dataset_info:
features:
- name: system
dtype: string
- name: prompt
dtype: string
- name: completion
dtype: string
- name: label
dtype: bool
- name: rating
dtype: float64
- name: status
dtype: string
- name: in_gsm8k_train
dtype: bool
splits:
- name: train
num_bytes: 41940501
num_examples: 23147
download_size: 17134184
dataset_size: 41940501
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- rlaif
- kto
- rlhf
- distilabel
- synthetic
---
<p align="right">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# distilabel Orca Pairs for KTO
> A KTO signal transformed version of the highly loved [distilabel Orca Pairs for DPO](https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs).
The dataset is a "distilabeled" version of the widely used dataset: [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs). The original dataset has been used by 100s of open-source practitioners and models. We knew from fixing UltraFeedback (and before that, Alpacas and Dollys) that this dataset could be highly improved.
Continuing with our mission to build the best alignment datasets for open-source LLMs and the community, we spent a few hours improving it with [distilabel](https://github.com/argilla-io/distilabel).
This was our main intuition: the original dataset just assumes gpt4/3.5-turbo are always the best response. We know from UltraFeedback that's not always the case. Moreover, DPO fine-tuning benefits from the diversity of preference pairs.
Additionally, we have added a new column indicating whether the question in the dataset is part of the train set of gsm8k (there were no examples from the test set). See the reproduction section for more details.
## Why KTO?
The [KTO paper](https://arxiv.org/abs/2402.01306) states:
- KTO matches or exceeds DPO performance at scales from 1B to 30B parameters. That is, taking a preference dataset of n DPO pairs and breaking it up into 2n examples for KTO can yield better generations, despite the model ostensibly learning from a weaker signal.
- KTO can handle extreme data imbalances, matching DPO performance while using up to 90% fewer desirable examples (i.e., examples of good generations). Its success thus cannot be ascribed to the alignment data being sourced from a preference dataset.
- When the pretrained model is sufficiently good, one can skip supervised finetuning and go straight to KTO without a loss in generation quality. In contrast, we find that without doing SFT first, DPO-aligned models are significantly worse at all scales.
## Reproduce KTO Transformation
Original [distilabel intel orca dpo pairs](https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs)
<a target="_blank" href="https://colab.research.google.com/drive/1Xe7onnFQscpiDQ4RzaiWlSx5QPYmqDtd?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
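The pair-to-KTO split described above (n preference pairs become 2n labeled examples) can be sketched as follows; the input pair shape is an assumption, while the output field names follow this dataset's columns:

```python
# Sketch of the DPO -> KTO signal transformation: each preference pair
# becomes one desirable (label=True) and one undesirable (label=False)
# example. Input argument names are assumptions for illustration.

def dpo_pair_to_kto(system, prompt, chosen, rejected):
    return [
        {"system": system, "prompt": prompt, "completion": chosen, "label": True},
        {"system": system, "prompt": prompt, "completion": rejected, "label": False},
    ]

rows = dpo_pair_to_kto("You are a helpful assistant.", "What is 2+2?", "4", "5")
print(len(rows))                            # 2: n pairs become 2n examples
print(rows[0]["label"], rows[1]["label"])   # True False
```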
|
open-llm-leaderboard/details_DopeorNope__SOLARC-M-10.7B | ---
pretty_name: Evaluation run of DopeorNope/SOLARC-M-10.7B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [DopeorNope/SOLARC-M-10.7B](https://huggingface.co/DopeorNope/SOLARC-M-10.7B)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_DopeorNope__SOLARC-M-10.7B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-04T12:32:38.431063](https://huggingface.co/datasets/open-llm-leaderboard/details_DopeorNope__SOLARC-M-10.7B/blob/main/results_2024-01-04T12-32-38.431063.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6669558592994752,\n\
\ \"acc_stderr\": 0.03159525026454693,\n \"acc_norm\": 0.667691393491765,\n\
\ \"acc_norm_stderr\": 0.03223875437522202,\n \"mc1\": 0.5716034271725826,\n\
\ \"mc1_stderr\": 0.017323088597314747,\n \"mc2\": 0.7185061667944077,\n\
\ \"mc2_stderr\": 0.015014851042298718\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6851535836177475,\n \"acc_stderr\": 0.01357265770308495,\n\
\ \"acc_norm\": 0.71160409556314,\n \"acc_norm_stderr\": 0.013238394422428173\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7133041226847242,\n\
\ \"acc_stderr\": 0.004512940497462742,\n \"acc_norm\": 0.8840868352917746,\n\
\ \"acc_norm_stderr\": 0.003194665266078602\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.42,\n \"acc_stderr\": 0.049604496374885836,\n \
\ \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.049604496374885836\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6148148148148148,\n\
\ \"acc_stderr\": 0.04203921040156279,\n \"acc_norm\": 0.6148148148148148,\n\
\ \"acc_norm_stderr\": 0.04203921040156279\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.756578947368421,\n \"acc_stderr\": 0.034923496688842384,\n\
\ \"acc_norm\": 0.756578947368421,\n \"acc_norm_stderr\": 0.034923496688842384\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.74,\n\
\ \"acc_stderr\": 0.0440844002276808,\n \"acc_norm\": 0.74,\n \
\ \"acc_norm_stderr\": 0.0440844002276808\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6830188679245283,\n \"acc_stderr\": 0.02863723563980089,\n\
\ \"acc_norm\": 0.6830188679245283,\n \"acc_norm_stderr\": 0.02863723563980089\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7777777777777778,\n\
\ \"acc_stderr\": 0.03476590104304134,\n \"acc_norm\": 0.7777777777777778,\n\
\ \"acc_norm_stderr\": 0.03476590104304134\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620333,\n \
\ \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.51,\n \"acc_stderr\": 0.05024183937956913,\n \"acc_norm\": 0.51,\n\
\ \"acc_norm_stderr\": 0.05024183937956913\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6705202312138728,\n\
\ \"acc_stderr\": 0.03583901754736412,\n \"acc_norm\": 0.6705202312138728,\n\
\ \"acc_norm_stderr\": 0.03583901754736412\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107223,\n\
\ \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107223\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n\
\ \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.625531914893617,\n \"acc_stderr\": 0.03163910665367291,\n\
\ \"acc_norm\": 0.625531914893617,\n \"acc_norm_stderr\": 0.03163910665367291\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.49122807017543857,\n\
\ \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.49122807017543857,\n\
\ \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.6275862068965518,\n \"acc_stderr\": 0.04028731532947558,\n\
\ \"acc_norm\": 0.6275862068965518,\n \"acc_norm_stderr\": 0.04028731532947558\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4947089947089947,\n \"acc_stderr\": 0.02574986828855657,\n \"\
acc_norm\": 0.4947089947089947,\n \"acc_norm_stderr\": 0.02574986828855657\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.42857142857142855,\n\
\ \"acc_stderr\": 0.0442626668137991,\n \"acc_norm\": 0.42857142857142855,\n\
\ \"acc_norm_stderr\": 0.0442626668137991\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8193548387096774,\n\
\ \"acc_stderr\": 0.021886178567172534,\n \"acc_norm\": 0.8193548387096774,\n\
\ \"acc_norm_stderr\": 0.021886178567172534\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5024630541871922,\n \"acc_stderr\": 0.03517945038691063,\n\
\ \"acc_norm\": 0.5024630541871922,\n \"acc_norm_stderr\": 0.03517945038691063\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\"\
: 0.72,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.8121212121212121,\n \"acc_stderr\": 0.03050193405942914,\n\
\ \"acc_norm\": 0.8121212121212121,\n \"acc_norm_stderr\": 0.03050193405942914\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8686868686868687,\n \"acc_stderr\": 0.024063156416822516,\n \"\
acc_norm\": 0.8686868686868687,\n \"acc_norm_stderr\": 0.024063156416822516\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9015544041450777,\n \"acc_stderr\": 0.02150024957603348,\n\
\ \"acc_norm\": 0.9015544041450777,\n \"acc_norm_stderr\": 0.02150024957603348\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6641025641025641,\n \"acc_stderr\": 0.023946724741563976,\n\
\ \"acc_norm\": 0.6641025641025641,\n \"acc_norm_stderr\": 0.023946724741563976\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.36666666666666664,\n \"acc_stderr\": 0.029381620726465073,\n \
\ \"acc_norm\": 0.36666666666666664,\n \"acc_norm_stderr\": 0.029381620726465073\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.7142857142857143,\n \"acc_stderr\": 0.029344572500634332,\n\
\ \"acc_norm\": 0.7142857142857143,\n \"acc_norm_stderr\": 0.029344572500634332\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.37748344370860926,\n \"acc_stderr\": 0.03958027231121569,\n \"\
acc_norm\": 0.37748344370860926,\n \"acc_norm_stderr\": 0.03958027231121569\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8458715596330275,\n \"acc_stderr\": 0.015480826865374308,\n \"\
acc_norm\": 0.8458715596330275,\n \"acc_norm_stderr\": 0.015480826865374308\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5787037037037037,\n \"acc_stderr\": 0.033674621388960775,\n \"\
acc_norm\": 0.5787037037037037,\n \"acc_norm_stderr\": 0.033674621388960775\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8578431372549019,\n \"acc_stderr\": 0.02450980392156862,\n \"\
acc_norm\": 0.8578431372549019,\n \"acc_norm_stderr\": 0.02450980392156862\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8481012658227848,\n \"acc_stderr\": 0.023363878096632446,\n \
\ \"acc_norm\": 0.8481012658227848,\n \"acc_norm_stderr\": 0.023363878096632446\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6816143497757847,\n\
\ \"acc_stderr\": 0.03126580522513713,\n \"acc_norm\": 0.6816143497757847,\n\
\ \"acc_norm_stderr\": 0.03126580522513713\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7633587786259542,\n \"acc_stderr\": 0.03727673575596915,\n\
\ \"acc_norm\": 0.7633587786259542,\n \"acc_norm_stderr\": 0.03727673575596915\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7851239669421488,\n \"acc_stderr\": 0.037494924487096966,\n \"\
acc_norm\": 0.7851239669421488,\n \"acc_norm_stderr\": 0.037494924487096966\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8055555555555556,\n\
\ \"acc_stderr\": 0.038260763248848646,\n \"acc_norm\": 0.8055555555555556,\n\
\ \"acc_norm_stderr\": 0.038260763248848646\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7607361963190185,\n \"acc_stderr\": 0.033519538795212696,\n\
\ \"acc_norm\": 0.7607361963190185,\n \"acc_norm_stderr\": 0.033519538795212696\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4642857142857143,\n\
\ \"acc_stderr\": 0.04733667890053756,\n \"acc_norm\": 0.4642857142857143,\n\
\ \"acc_norm_stderr\": 0.04733667890053756\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8543689320388349,\n \"acc_stderr\": 0.03492606476623791,\n\
\ \"acc_norm\": 0.8543689320388349,\n \"acc_norm_stderr\": 0.03492606476623791\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8547008547008547,\n\
\ \"acc_stderr\": 0.0230866350868414,\n \"acc_norm\": 0.8547008547008547,\n\
\ \"acc_norm_stderr\": 0.0230866350868414\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8033205619412516,\n\
\ \"acc_stderr\": 0.014214138556913917,\n \"acc_norm\": 0.8033205619412516,\n\
\ \"acc_norm_stderr\": 0.014214138556913917\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7572254335260116,\n \"acc_stderr\": 0.023083658586984204,\n\
\ \"acc_norm\": 0.7572254335260116,\n \"acc_norm_stderr\": 0.023083658586984204\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.39329608938547483,\n\
\ \"acc_stderr\": 0.016337268694270105,\n \"acc_norm\": 0.39329608938547483,\n\
\ \"acc_norm_stderr\": 0.016337268694270105\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7581699346405228,\n \"acc_stderr\": 0.024518195641879334,\n\
\ \"acc_norm\": 0.7581699346405228,\n \"acc_norm_stderr\": 0.024518195641879334\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.729903536977492,\n\
\ \"acc_stderr\": 0.02521804037341062,\n \"acc_norm\": 0.729903536977492,\n\
\ \"acc_norm_stderr\": 0.02521804037341062\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7870370370370371,\n \"acc_stderr\": 0.0227797190887334,\n\
\ \"acc_norm\": 0.7870370370370371,\n \"acc_norm_stderr\": 0.0227797190887334\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4929078014184397,\n \"acc_stderr\": 0.02982449855912901,\n \
\ \"acc_norm\": 0.4929078014184397,\n \"acc_norm_stderr\": 0.02982449855912901\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4934810951760104,\n\
\ \"acc_stderr\": 0.012769150688867503,\n \"acc_norm\": 0.4934810951760104,\n\
\ \"acc_norm_stderr\": 0.012769150688867503\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.7389705882352942,\n \"acc_stderr\": 0.026679252270103128,\n\
\ \"acc_norm\": 0.7389705882352942,\n \"acc_norm_stderr\": 0.026679252270103128\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6797385620915033,\n \"acc_stderr\": 0.018875682938069446,\n \
\ \"acc_norm\": 0.6797385620915033,\n \"acc_norm_stderr\": 0.018875682938069446\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n\
\ \"acc_stderr\": 0.04461272175910509,\n \"acc_norm\": 0.6818181818181818,\n\
\ \"acc_norm_stderr\": 0.04461272175910509\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7387755102040816,\n \"acc_stderr\": 0.028123429335142783,\n\
\ \"acc_norm\": 0.7387755102040816,\n \"acc_norm_stderr\": 0.028123429335142783\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.835820895522388,\n\
\ \"acc_stderr\": 0.026193923544454125,\n \"acc_norm\": 0.835820895522388,\n\
\ \"acc_norm_stderr\": 0.026193923544454125\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.9,\n \"acc_stderr\": 0.030151134457776334,\n \
\ \"acc_norm\": 0.9,\n \"acc_norm_stderr\": 0.030151134457776334\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5843373493975904,\n\
\ \"acc_stderr\": 0.03836722176598053,\n \"acc_norm\": 0.5843373493975904,\n\
\ \"acc_norm_stderr\": 0.03836722176598053\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.783625730994152,\n \"acc_stderr\": 0.03158149539338733,\n\
\ \"acc_norm\": 0.783625730994152,\n \"acc_norm_stderr\": 0.03158149539338733\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5716034271725826,\n\
\ \"mc1_stderr\": 0.017323088597314747,\n \"mc2\": 0.7185061667944077,\n\
\ \"mc2_stderr\": 0.015014851042298718\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8334648776637726,\n \"acc_stderr\": 0.010470796496781091\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6542835481425322,\n \
\ \"acc_stderr\": 0.013100422990441573\n }\n}\n```"
repo_url: https://huggingface.co/DopeorNope/SOLARC-M-10.7B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|arc:challenge|25_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|gsm8k|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hellaswag|10_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-04T12-32-38.431063.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-04T12-32-38.431063.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- '**/details_harness|winogrande|5_2024-01-04T12-32-38.431063.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-04T12-32-38.431063.parquet'
- config_name: results
data_files:
- split: 2024_01_04T12_32_38.431063
path:
- results_2024-01-04T12-32-38.431063.parquet
- split: latest
path:
- results_2024-01-04T12-32-38.431063.parquet
---
# Dataset Card for Evaluation run of DopeorNope/SOLARC-M-10.7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [DopeorNope/SOLARC-M-10.7B](https://huggingface.co/DopeorNope/SOLARC-M-10.7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, one for each evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, with the split named after the timestamp of the run. The "train" split always points to the latest results.
An additional configuration, "results", stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_DopeorNope__SOLARC-M-10.7B",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-01-04T12:32:38.431063](https://huggingface.co/datasets/open-llm-leaderboard/details_DopeorNope__SOLARC-M-10.7B/blob/main/results_2024-01-04T12-32-38.431063.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.6669558592994752,
"acc_stderr": 0.03159525026454693,
"acc_norm": 0.667691393491765,
"acc_norm_stderr": 0.03223875437522202,
"mc1": 0.5716034271725826,
"mc1_stderr": 0.017323088597314747,
"mc2": 0.7185061667944077,
"mc2_stderr": 0.015014851042298718
},
"harness|arc:challenge|25": {
"acc": 0.6851535836177475,
"acc_stderr": 0.01357265770308495,
"acc_norm": 0.71160409556314,
"acc_norm_stderr": 0.013238394422428173
},
"harness|hellaswag|10": {
"acc": 0.7133041226847242,
"acc_stderr": 0.004512940497462742,
"acc_norm": 0.8840868352917746,
"acc_norm_stderr": 0.003194665266078602
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.42,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.42,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6148148148148148,
"acc_stderr": 0.04203921040156279,
"acc_norm": 0.6148148148148148,
"acc_norm_stderr": 0.04203921040156279
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.756578947368421,
"acc_stderr": 0.034923496688842384,
"acc_norm": 0.756578947368421,
"acc_norm_stderr": 0.034923496688842384
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.74,
"acc_stderr": 0.0440844002276808,
"acc_norm": 0.74,
"acc_norm_stderr": 0.0440844002276808
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6830188679245283,
"acc_stderr": 0.02863723563980089,
"acc_norm": 0.6830188679245283,
"acc_norm_stderr": 0.02863723563980089
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.03476590104304134,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.03476590104304134
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956913,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956913
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6705202312138728,
"acc_stderr": 0.03583901754736412,
"acc_norm": 0.6705202312138728,
"acc_norm_stderr": 0.03583901754736412
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.38235294117647056,
"acc_stderr": 0.04835503696107223,
"acc_norm": 0.38235294117647056,
"acc_norm_stderr": 0.04835503696107223
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.625531914893617,
"acc_stderr": 0.03163910665367291,
"acc_norm": 0.625531914893617,
"acc_norm_stderr": 0.03163910665367291
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.49122807017543857,
"acc_stderr": 0.04702880432049615,
"acc_norm": 0.49122807017543857,
"acc_norm_stderr": 0.04702880432049615
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6275862068965518,
"acc_stderr": 0.04028731532947558,
"acc_norm": 0.6275862068965518,
"acc_norm_stderr": 0.04028731532947558
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4947089947089947,
"acc_stderr": 0.02574986828855657,
"acc_norm": 0.4947089947089947,
"acc_norm_stderr": 0.02574986828855657
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.0442626668137991,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.0442626668137991
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8193548387096774,
"acc_stderr": 0.021886178567172534,
"acc_norm": 0.8193548387096774,
"acc_norm_stderr": 0.021886178567172534
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5024630541871922,
"acc_stderr": 0.03517945038691063,
"acc_norm": 0.5024630541871922,
"acc_norm_stderr": 0.03517945038691063
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8121212121212121,
"acc_stderr": 0.03050193405942914,
"acc_norm": 0.8121212121212121,
"acc_norm_stderr": 0.03050193405942914
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8686868686868687,
"acc_stderr": 0.024063156416822516,
"acc_norm": 0.8686868686868687,
"acc_norm_stderr": 0.024063156416822516
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9015544041450777,
"acc_stderr": 0.02150024957603348,
"acc_norm": 0.9015544041450777,
"acc_norm_stderr": 0.02150024957603348
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6641025641025641,
"acc_stderr": 0.023946724741563976,
"acc_norm": 0.6641025641025641,
"acc_norm_stderr": 0.023946724741563976
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.36666666666666664,
"acc_stderr": 0.029381620726465073,
"acc_norm": 0.36666666666666664,
"acc_norm_stderr": 0.029381620726465073
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7142857142857143,
"acc_stderr": 0.029344572500634332,
"acc_norm": 0.7142857142857143,
"acc_norm_stderr": 0.029344572500634332
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.37748344370860926,
"acc_stderr": 0.03958027231121569,
"acc_norm": 0.37748344370860926,
"acc_norm_stderr": 0.03958027231121569
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8458715596330275,
"acc_stderr": 0.015480826865374308,
"acc_norm": 0.8458715596330275,
"acc_norm_stderr": 0.015480826865374308
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5787037037037037,
"acc_stderr": 0.033674621388960775,
"acc_norm": 0.5787037037037037,
"acc_norm_stderr": 0.033674621388960775
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8578431372549019,
"acc_stderr": 0.02450980392156862,
"acc_norm": 0.8578431372549019,
"acc_norm_stderr": 0.02450980392156862
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8481012658227848,
"acc_stderr": 0.023363878096632446,
"acc_norm": 0.8481012658227848,
"acc_norm_stderr": 0.023363878096632446
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6816143497757847,
"acc_stderr": 0.03126580522513713,
"acc_norm": 0.6816143497757847,
"acc_norm_stderr": 0.03126580522513713
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7633587786259542,
"acc_stderr": 0.03727673575596915,
"acc_norm": 0.7633587786259542,
"acc_norm_stderr": 0.03727673575596915
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7851239669421488,
"acc_stderr": 0.037494924487096966,
"acc_norm": 0.7851239669421488,
"acc_norm_stderr": 0.037494924487096966
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8055555555555556,
"acc_stderr": 0.038260763248848646,
"acc_norm": 0.8055555555555556,
"acc_norm_stderr": 0.038260763248848646
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7607361963190185,
"acc_stderr": 0.033519538795212696,
"acc_norm": 0.7607361963190185,
"acc_norm_stderr": 0.033519538795212696
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4642857142857143,
"acc_stderr": 0.04733667890053756,
"acc_norm": 0.4642857142857143,
"acc_norm_stderr": 0.04733667890053756
},
"harness|hendrycksTest-management|5": {
"acc": 0.8543689320388349,
"acc_stderr": 0.03492606476623791,
"acc_norm": 0.8543689320388349,
"acc_norm_stderr": 0.03492606476623791
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8547008547008547,
"acc_stderr": 0.0230866350868414,
"acc_norm": 0.8547008547008547,
"acc_norm_stderr": 0.0230866350868414
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8033205619412516,
"acc_stderr": 0.014214138556913917,
"acc_norm": 0.8033205619412516,
"acc_norm_stderr": 0.014214138556913917
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7572254335260116,
"acc_stderr": 0.023083658586984204,
"acc_norm": 0.7572254335260116,
"acc_norm_stderr": 0.023083658586984204
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.39329608938547483,
"acc_stderr": 0.016337268694270105,
"acc_norm": 0.39329608938547483,
"acc_norm_stderr": 0.016337268694270105
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7581699346405228,
"acc_stderr": 0.024518195641879334,
"acc_norm": 0.7581699346405228,
"acc_norm_stderr": 0.024518195641879334
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.729903536977492,
"acc_stderr": 0.02521804037341062,
"acc_norm": 0.729903536977492,
"acc_norm_stderr": 0.02521804037341062
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7870370370370371,
"acc_stderr": 0.0227797190887334,
"acc_norm": 0.7870370370370371,
"acc_norm_stderr": 0.0227797190887334
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4929078014184397,
"acc_stderr": 0.02982449855912901,
"acc_norm": 0.4929078014184397,
"acc_norm_stderr": 0.02982449855912901
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4934810951760104,
"acc_stderr": 0.012769150688867503,
"acc_norm": 0.4934810951760104,
"acc_norm_stderr": 0.012769150688867503
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7389705882352942,
"acc_stderr": 0.026679252270103128,
"acc_norm": 0.7389705882352942,
"acc_norm_stderr": 0.026679252270103128
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6797385620915033,
"acc_stderr": 0.018875682938069446,
"acc_norm": 0.6797385620915033,
"acc_norm_stderr": 0.018875682938069446
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6818181818181818,
"acc_stderr": 0.04461272175910509,
"acc_norm": 0.6818181818181818,
"acc_norm_stderr": 0.04461272175910509
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7387755102040816,
"acc_stderr": 0.028123429335142783,
"acc_norm": 0.7387755102040816,
"acc_norm_stderr": 0.028123429335142783
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.835820895522388,
"acc_stderr": 0.026193923544454125,
"acc_norm": 0.835820895522388,
"acc_norm_stderr": 0.026193923544454125
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.9,
"acc_stderr": 0.030151134457776334,
"acc_norm": 0.9,
"acc_norm_stderr": 0.030151134457776334
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5843373493975904,
"acc_stderr": 0.03836722176598053,
"acc_norm": 0.5843373493975904,
"acc_norm_stderr": 0.03836722176598053
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.783625730994152,
"acc_stderr": 0.03158149539338733,
"acc_norm": 0.783625730994152,
"acc_norm_stderr": 0.03158149539338733
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5716034271725826,
"mc1_stderr": 0.017323088597314747,
"mc2": 0.7185061667944077,
"mc2_stderr": 0.015014851042298718
},
"harness|winogrande|5": {
"acc": 0.8334648776637726,
"acc_stderr": 0.010470796496781091
},
"harness|gsm8k|5": {
"acc": 0.6542835481425322,
"acc_stderr": 0.013100422990441573
}
}
```
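Since every MMLU sub-task is stored under a `harness|hendrycksTest-...` key, a sub-benchmark average can be recomputed by filtering on that prefix. A minimal sketch, using three of the entries shown above (the full results dict contains one such entry per task):

```python
import statistics

# Three hendrycksTest entries copied from the results above; the full
# dict contains one such entry per evaluated task.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.42},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.6148148148148148},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.756578947368421},
}

# Filter by the task-name prefix and average the accuracies.
mmlu_accs = [v["acc"] for k, v in results.items()
             if k.startswith("harness|hendrycksTest-")]
print(round(statistics.mean(mmlu_accs), 4))  # 0.5971 for these three tasks
```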
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Recruta/PedroAccioly | ---
license: openrail
---
|
CyberHarem/houjou_karen_idolmastercinderellagirls | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of houjou_karen/北条加蓮 (THE iDOLM@STER: Cinderella Girls)
This is the dataset of houjou_karen/北条加蓮 (THE iDOLM@STER: Cinderella Girls), containing 500 images and their tags.
The core tags of this character are `brown_hair, brown_eyes, long_hair, breasts, bangs, medium_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 696.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/houjou_karen_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800              | 500      | 421.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/houjou_karen_idolmastercinderellagirls/resolve/main/dataset-800.zip)                | IMG+TXT    | Dataset with the shorter side not exceeding 800 pixels.               |
| stage3-p480-800 | 1216 | 882.23 MiB | [Download](https://huggingface.co/datasets/CyberHarem/houjou_karen_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200             | 500      | 626.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/houjou_karen_idolmastercinderellagirls/resolve/main/dataset-1200.zip)               | IMG+TXT    | Dataset with the shorter side not exceeding 1200 pixels.              |
| stage3-p480-1200 | 1216 | 1.18 GiB | [Download](https://huggingface.co/datasets/CyberHarem/houjou_karen_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/houjou_karen_idolmastercinderellagirls',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
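For the IMG+TXT packages, each image ships with a same-named `.txt` file holding its tags, so after extraction the pairs can be collected without waifuc. A minimal sketch, assuming that filename layout (the pairing convention is inferred from the package-type column above):

```python
from pathlib import Path

def collect_pairs(dataset_dir):
    """Yield (image_path, tag_string) pairs from an extracted IMG+TXT package.

    Assumes each image has a sibling .txt file with the same stem,
    which is the layout suggested by the IMG+TXT package type above.
    """
    root = Path(dataset_dir)
    for img in sorted(root.rglob("*")):
        if img.suffix.lower() in {".png", ".jpg", ".jpeg", ".webp"}:
            txt = img.with_suffix(".txt")
            if txt.exists():
                yield img, txt.read_text(encoding="utf-8").strip()
```

Each yielded tag string is the comma-separated tag list for the matching image, ready to feed into a captioning or fine-tuning pipeline.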
## List of Clusters
List of tag clustering results; some outfits may be discoverable here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 9 |  |  |  |  |  | 1girl, smile, solo, blush, looking_at_viewer, open_mouth, microphone, earrings |
| 1 | 5 |  |  |  |  |  | 1girl, open_mouth, smile, solo, blush, looking_at_viewer, pantyhose, dress, one_eye_closed, scarf |
| 2 | 10 |  |  |  |  |  | dress, 1girl, blush, solo, bare_shoulders, looking_at_viewer, open_mouth, :d, earrings, choker, elbow_gloves, hair_flower |
| 3 | 11 |  |  |  |  |  | 1girl, blush, solo, school_uniform, smile, looking_at_viewer, necklace, skirt, twintails, cardigan, bag, drill_hair, open_mouth |
| 4 | 5 |  |  |  |  |  | blush, hair_flower, looking_at_viewer, 1girl, blue_bikini, cleavage, collarbone, frilled_bikini, large_breasts, navel, open_mouth, orange_hair, solo, :d, necklace, short_hair, side-tie_bikini_bottom, yellow_eyes |
| 5 | 10 |  |  |  |  |  | 1girl, blue_sky, cleavage, collarbone, day, solo, blush, cloud, looking_at_viewer, outdoors, floral_print, navel, white_bikini, open_mouth, necklace, ocean, side-tie_bikini_bottom, :d, water, arm_up, hair_flower, orange_hair, wading |
| 6 | 7 |  |  |  |  |  | 1girl, blush, looking_at_viewer, navel, nipples, female_pubic_hair, solo, smile, completely_nude, pussy, simple_background, white_background, collarbone, large_breasts |
| 7 | 5 |  |  |  |  |  | 1girl, blush, cloud, looking_at_viewer, outdoors, sky, solo, straw_hat, white_dress, day, smile, bare_shoulders, ocean, open_mouth, sundress, water, collarbone, flower, sun_hat, twintails, wet_clothes, wind_lift |
| 8 | 6 |  |  |  |  |  | 1girl, fake_animal_ears, looking_at_viewer, playboy_bunny, rabbit_ears, wrist_cuffs, black_leotard, blush, cleavage, detached_collar, smile, solo, strapless_leotard, bare_shoulders, black_bowtie, ass, black_pantyhose, closed_mouth, rabbit_tail |
| 9 | 5 |  |  |  |  |  | 1boy, 1girl, blush, girl_on_top, hetero, open_mouth, sex, vaginal, cowgirl_position, looking_at_viewer, navel, penis, pov, solo_focus, sweat, female_pubic_hair, large_breasts, nipples, orange_hair, bar_censor, bikini_bottom_aside, cum_in_pussy, heart, holding_hands, interlocked_fingers, smile, spread_legs, yellow_eyes |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | smile | solo | blush | looking_at_viewer | open_mouth | microphone | earrings | pantyhose | dress | one_eye_closed | scarf | bare_shoulders | :d | choker | elbow_gloves | hair_flower | school_uniform | necklace | skirt | twintails | cardigan | bag | drill_hair | blue_bikini | cleavage | collarbone | frilled_bikini | large_breasts | navel | orange_hair | short_hair | side-tie_bikini_bottom | yellow_eyes | blue_sky | day | cloud | outdoors | floral_print | white_bikini | ocean | water | arm_up | wading | nipples | female_pubic_hair | completely_nude | pussy | simple_background | white_background | sky | straw_hat | white_dress | sundress | flower | sun_hat | wet_clothes | wind_lift | fake_animal_ears | playboy_bunny | rabbit_ears | wrist_cuffs | black_leotard | detached_collar | strapless_leotard | black_bowtie | ass | black_pantyhose | closed_mouth | rabbit_tail | 1boy | girl_on_top | hetero | sex | vaginal | cowgirl_position | penis | pov | solo_focus | sweat | bar_censor | bikini_bottom_aside | cum_in_pussy | heart | holding_hands | interlocked_fingers | spread_legs |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------|:--------|:--------------------|:-------------|:-------------|:-----------|:------------|:--------|:-----------------|:--------|:-----------------|:-----|:---------|:---------------|:--------------|:-----------------|:-----------|:--------|:------------|:-----------|:------|:-------------|:--------------|:-----------|:-------------|:-----------------|:----------------|:--------|:--------------|:-------------|:-------------------------|:--------------|:-----------|:------|:--------|:-----------|:---------------|:---------------|:--------|:--------|:---------|:---------|:----------|:--------------------|:------------------|:--------|:--------------------|:-------------------|:------|:------------|:--------------|:-----------|:---------|:----------|:--------------|:------------|:-------------------|:----------------|:--------------|:--------------|:----------------|:------------------|:--------------------|:---------------|:------|:------------------|:---------------|:--------------|:-------|:--------------|:---------|:------|:----------|:-------------------|:--------|:------|:-------------|:--------|:-------------|:----------------------|:---------------|:--------|:----------------|:----------------------|:--------------|
| 0 | 9 |  |  |  |  |  | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | X | X | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 10 |  |  |  |  |  | X | | X | X | X | X | | X | | X | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 11 |  |  |  |  |  | X | X | X | X | X | X | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 5 |  |  |  |  |  | X | | X | X | X | X | | | | | | | | X | | | X | | X | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 10 |  |  |  |  |  | X | | X | X | X | X | | | | | | | | X | | | X | | X | | | | | | | X | X | | | X | X | | X | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 7 |  |  |  |  |  | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | X | | X | X | | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 5 |  |  |  |  |  | X | X | X | X | X | X | | | | | | | X | | | | | | | | X | | | | | | X | | | | | | | | | X | X | X | | | X | X | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 6 |  |  |  |  |  | X | X | X | X | X | | | | | | | | X | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | |
| 9 | 5 |  |  |  |  |  | X | X | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | X | X | X | | | X | | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
phatjk/wikipedia_vi | ---
dataset_info:
features:
- name: title
dtype: string
- name: text
dtype: string
- name: bm25_text
dtype: string
splits:
- name: train
num_bytes: 2889457164
num_examples: 1944406
download_size: 1242752879
dataset_size: 2889457164
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "wikipedia_vi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gabrielmbmb/ultrafeedback-prompts-judgelm-gpt35-with-principles | ---
dataset_info:
features:
- name: input
dtype: string
- name: generation_model
dtype: string
- name: generation_prompt
dtype: string
- name: raw_generation_responses
sequence: string
- name: generations
sequence: string
- name: labelling_model
dtype: string
- name: labelling_prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: raw_labelling_response
dtype: string
- name: ratings
sequence: int64
- name: rationale
dtype: string
splits:
- name: train
num_bytes: 14351059
num_examples: 1000
download_size: 6587567
dataset_size: 14351059
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ultrafeedback-prompts-judgelm-gpt35-with-principles"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
EgilKarlsen/AA_DistilRoBERTa_FT3 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: '0'
dtype: float32
- name: '1'
dtype: float32
- name: '2'
dtype: float32
- name: '3'
dtype: float32
- name: '4'
dtype: float32
- name: '5'
dtype: float32
- name: '6'
dtype: float32
- name: '7'
dtype: float32
- name: '8'
dtype: float32
- name: '9'
dtype: float32
- name: '10'
dtype: float32
- name: '11'
dtype: float32
- name: '12'
dtype: float32
- name: '13'
dtype: float32
- name: '14'
dtype: float32
- name: '15'
dtype: float32
- name: '16'
dtype: float32
- name: '17'
dtype: float32
- name: '18'
dtype: float32
- name: '19'
dtype: float32
- name: '20'
dtype: float32
- name: '21'
dtype: float32
- name: '22'
dtype: float32
- name: '23'
dtype: float32
- name: '24'
dtype: float32
- name: '25'
dtype: float32
- name: '26'
dtype: float32
- name: '27'
dtype: float32
- name: '28'
dtype: float32
- name: '29'
dtype: float32
- name: '30'
dtype: float32
- name: '31'
dtype: float32
- name: '32'
dtype: float32
- name: '33'
dtype: float32
- name: '34'
dtype: float32
- name: '35'
dtype: float32
- name: '36'
dtype: float32
- name: '37'
dtype: float32
- name: '38'
dtype: float32
- name: '39'
dtype: float32
- name: '40'
dtype: float32
- name: '41'
dtype: float32
- name: '42'
dtype: float32
- name: '43'
dtype: float32
- name: '44'
dtype: float32
- name: '45'
dtype: float32
- name: '46'
dtype: float32
- name: '47'
dtype: float32
- name: '48'
dtype: float32
- name: '49'
dtype: float32
- name: '50'
dtype: float32
- name: '51'
dtype: float32
- name: '52'
dtype: float32
- name: '53'
dtype: float32
- name: '54'
dtype: float32
- name: '55'
dtype: float32
- name: '56'
dtype: float32
- name: '57'
dtype: float32
- name: '58'
dtype: float32
- name: '59'
dtype: float32
- name: '60'
dtype: float32
- name: '61'
dtype: float32
- name: '62'
dtype: float32
- name: '63'
dtype: float32
- name: '64'
dtype: float32
- name: '65'
dtype: float32
- name: '66'
dtype: float32
- name: '67'
dtype: float32
- name: '68'
dtype: float32
- name: '69'
dtype: float32
- name: '70'
dtype: float32
- name: '71'
dtype: float32
- name: '72'
dtype: float32
- name: '73'
dtype: float32
- name: '74'
dtype: float32
- name: '75'
dtype: float32
- name: '76'
dtype: float32
- name: '77'
dtype: float32
- name: '78'
dtype: float32
- name: '79'
dtype: float32
- name: '80'
dtype: float32
- name: '81'
dtype: float32
- name: '82'
dtype: float32
- name: '83'
dtype: float32
- name: '84'
dtype: float32
- name: '85'
dtype: float32
- name: '86'
dtype: float32
- name: '87'
dtype: float32
- name: '88'
dtype: float32
- name: '89'
dtype: float32
- name: '90'
dtype: float32
- name: '91'
dtype: float32
- name: '92'
dtype: float32
- name: '93'
dtype: float32
- name: '94'
dtype: float32
- name: '95'
dtype: float32
- name: '96'
dtype: float32
- name: '97'
dtype: float32
- name: '98'
dtype: float32
- name: '99'
dtype: float32
- name: '100'
dtype: float32
- name: '101'
dtype: float32
- name: '102'
dtype: float32
- name: '103'
dtype: float32
- name: '104'
dtype: float32
- name: '105'
dtype: float32
- name: '106'
dtype: float32
- name: '107'
dtype: float32
- name: '108'
dtype: float32
- name: '109'
dtype: float32
- name: '110'
dtype: float32
- name: '111'
dtype: float32
- name: '112'
dtype: float32
- name: '113'
dtype: float32
- name: '114'
dtype: float32
- name: '115'
dtype: float32
- name: '116'
dtype: float32
- name: '117'
dtype: float32
- name: '118'
dtype: float32
- name: '119'
dtype: float32
- name: '120'
dtype: float32
- name: '121'
dtype: float32
- name: '122'
dtype: float32
- name: '123'
dtype: float32
- name: '124'
dtype: float32
- name: '125'
dtype: float32
- name: '126'
dtype: float32
- name: '127'
dtype: float32
- name: '128'
dtype: float32
- name: '129'
dtype: float32
- name: '130'
dtype: float32
- name: '131'
dtype: float32
- name: '132'
dtype: float32
- name: '133'
dtype: float32
- name: '134'
dtype: float32
- name: '135'
dtype: float32
- name: '136'
dtype: float32
- name: '137'
dtype: float32
- name: '138'
dtype: float32
- name: '139'
dtype: float32
- name: '140'
dtype: float32
- name: '141'
dtype: float32
- name: '142'
dtype: float32
- name: '143'
dtype: float32
- name: '144'
dtype: float32
- name: '145'
dtype: float32
- name: '146'
dtype: float32
- name: '147'
dtype: float32
- name: '148'
dtype: float32
- name: '149'
dtype: float32
- name: '150'
dtype: float32
- name: '151'
dtype: float32
- name: '152'
dtype: float32
- name: '153'
dtype: float32
- name: '154'
dtype: float32
- name: '155'
dtype: float32
- name: '156'
dtype: float32
- name: '157'
dtype: float32
- name: '158'
dtype: float32
- name: '159'
dtype: float32
- name: '160'
dtype: float32
- name: '161'
dtype: float32
- name: '162'
dtype: float32
- name: '163'
dtype: float32
- name: '164'
dtype: float32
- name: '165'
dtype: float32
- name: '166'
dtype: float32
- name: '167'
dtype: float32
- name: '168'
dtype: float32
- name: '169'
dtype: float32
- name: '170'
dtype: float32
- name: '171'
dtype: float32
- name: '172'
dtype: float32
- name: '173'
dtype: float32
- name: '174'
dtype: float32
- name: '175'
dtype: float32
- name: '176'
dtype: float32
- name: '177'
dtype: float32
- name: '178'
dtype: float32
- name: '179'
dtype: float32
- name: '180'
dtype: float32
- name: '181'
dtype: float32
- name: '182'
dtype: float32
- name: '183'
dtype: float32
- name: '184'
dtype: float32
- name: '185'
dtype: float32
- name: '186'
dtype: float32
- name: '187'
dtype: float32
- name: '188'
dtype: float32
- name: '189'
dtype: float32
- name: '190'
dtype: float32
- name: '191'
dtype: float32
- name: '192'
dtype: float32
- name: '193'
dtype: float32
- name: '194'
dtype: float32
- name: '195'
dtype: float32
- name: '196'
dtype: float32
- name: '197'
dtype: float32
- name: '198'
dtype: float32
- name: '199'
dtype: float32
- name: '200'
dtype: float32
- name: '201'
dtype: float32
- name: '202'
dtype: float32
- name: '203'
dtype: float32
- name: '204'
dtype: float32
- name: '205'
dtype: float32
- name: '206'
dtype: float32
- name: '207'
dtype: float32
- name: '208'
dtype: float32
- name: '209'
dtype: float32
- name: '210'
dtype: float32
- name: '211'
dtype: float32
- name: '212'
dtype: float32
- name: '213'
dtype: float32
- name: '214'
dtype: float32
- name: '215'
dtype: float32
- name: '216'
dtype: float32
- name: '217'
dtype: float32
- name: '218'
dtype: float32
- name: '219'
dtype: float32
- name: '220'
dtype: float32
- name: '221'
dtype: float32
- name: '222'
dtype: float32
- name: '223'
dtype: float32
- name: '224'
dtype: float32
- name: '225'
dtype: float32
- name: '226'
dtype: float32
- name: '227'
dtype: float32
- name: '228'
dtype: float32
- name: '229'
dtype: float32
- name: '230'
dtype: float32
- name: '231'
dtype: float32
- name: '232'
dtype: float32
- name: '233'
dtype: float32
- name: '234'
dtype: float32
- name: '235'
dtype: float32
- name: '236'
dtype: float32
- name: '237'
dtype: float32
- name: '238'
dtype: float32
- name: '239'
dtype: float32
- name: '240'
dtype: float32
- name: '241'
dtype: float32
- name: '242'
dtype: float32
- name: '243'
dtype: float32
- name: '244'
dtype: float32
- name: '245'
dtype: float32
- name: '246'
dtype: float32
- name: '247'
dtype: float32
- name: '248'
dtype: float32
- name: '249'
dtype: float32
- name: '250'
dtype: float32
- name: '251'
dtype: float32
- name: '252'
dtype: float32
- name: '253'
dtype: float32
- name: '254'
dtype: float32
- name: '255'
dtype: float32
- name: '256'
dtype: float32
- name: '257'
dtype: float32
- name: '258'
dtype: float32
- name: '259'
dtype: float32
- name: '260'
dtype: float32
- name: '261'
dtype: float32
- name: '262'
dtype: float32
- name: '263'
dtype: float32
- name: '264'
dtype: float32
- name: '265'
dtype: float32
- name: '266'
dtype: float32
- name: '267'
dtype: float32
- name: '268'
dtype: float32
- name: '269'
dtype: float32
- name: '270'
dtype: float32
- name: '271'
dtype: float32
- name: '272'
dtype: float32
- name: '273'
dtype: float32
- name: '274'
dtype: float32
- name: '275'
dtype: float32
- name: '276'
dtype: float32
- name: '277'
dtype: float32
- name: '278'
dtype: float32
- name: '279'
dtype: float32
- name: '280'
dtype: float32
- name: '281'
dtype: float32
- name: '282'
dtype: float32
- name: '283'
dtype: float32
- name: '284'
dtype: float32
- name: '285'
dtype: float32
- name: '286'
dtype: float32
- name: '287'
dtype: float32
- name: '288'
dtype: float32
- name: '289'
dtype: float32
- name: '290'
dtype: float32
- name: '291'
dtype: float32
- name: '292'
dtype: float32
- name: '293'
dtype: float32
- name: '294'
dtype: float32
- name: '295'
dtype: float32
- name: '296'
dtype: float32
- name: '297'
dtype: float32
- name: '298'
dtype: float32
- name: '299'
dtype: float32
- name: '300'
dtype: float32
- name: '301'
dtype: float32
- name: '302'
dtype: float32
- name: '303'
dtype: float32
- name: '304'
dtype: float32
- name: '305'
dtype: float32
- name: '306'
dtype: float32
- name: '307'
dtype: float32
- name: '308'
dtype: float32
- name: '309'
dtype: float32
- name: '310'
dtype: float32
- name: '311'
dtype: float32
- name: '312'
dtype: float32
- name: '313'
dtype: float32
- name: '314'
dtype: float32
- name: '315'
dtype: float32
- name: '316'
dtype: float32
- name: '317'
dtype: float32
- name: '318'
dtype: float32
- name: '319'
dtype: float32
- name: '320'
dtype: float32
- name: '321'
dtype: float32
- name: '322'
dtype: float32
- name: '323'
dtype: float32
- name: '324'
dtype: float32
- name: '325'
dtype: float32
- name: '326'
dtype: float32
- name: '327'
dtype: float32
- name: '328'
dtype: float32
- name: '329'
dtype: float32
- name: '330'
dtype: float32
- name: '331'
dtype: float32
- name: '332'
dtype: float32
- name: '333'
dtype: float32
- name: '334'
dtype: float32
- name: '335'
dtype: float32
- name: '336'
dtype: float32
- name: '337'
dtype: float32
- name: '338'
dtype: float32
- name: '339'
dtype: float32
- name: '340'
dtype: float32
- name: '341'
dtype: float32
- name: '342'
dtype: float32
- name: '343'
dtype: float32
- name: '344'
dtype: float32
- name: '345'
dtype: float32
- name: '346'
dtype: float32
- name: '347'
dtype: float32
- name: '348'
dtype: float32
- name: '349'
dtype: float32
- name: '350'
dtype: float32
- name: '351'
dtype: float32
- name: '352'
dtype: float32
- name: '353'
dtype: float32
- name: '354'
dtype: float32
- name: '355'
dtype: float32
- name: '356'
dtype: float32
- name: '357'
dtype: float32
- name: '358'
dtype: float32
- name: '359'
dtype: float32
- name: '360'
dtype: float32
- name: '361'
dtype: float32
- name: '362'
dtype: float32
- name: '363'
dtype: float32
- name: '364'
dtype: float32
- name: '365'
dtype: float32
- name: '366'
dtype: float32
- name: '367'
dtype: float32
- name: '368'
dtype: float32
- name: '369'
dtype: float32
- name: '370'
dtype: float32
- name: '371'
dtype: float32
- name: '372'
dtype: float32
- name: '373'
dtype: float32
- name: '374'
dtype: float32
- name: '375'
dtype: float32
- name: '376'
dtype: float32
- name: '377'
dtype: float32
- name: '378'
dtype: float32
- name: '379'
dtype: float32
- name: '380'
dtype: float32
- name: '381'
dtype: float32
- name: '382'
dtype: float32
- name: '383'
dtype: float32
- name: '384'
dtype: float32
- name: '385'
dtype: float32
- name: '386'
dtype: float32
- name: '387'
dtype: float32
- name: '388'
dtype: float32
- name: '389'
dtype: float32
- name: '390'
dtype: float32
- name: '391'
dtype: float32
- name: '392'
dtype: float32
- name: '393'
dtype: float32
- name: '394'
dtype: float32
- name: '395'
dtype: float32
- name: '396'
dtype: float32
- name: '397'
dtype: float32
- name: '398'
dtype: float32
- name: '399'
dtype: float32
- name: '400'
dtype: float32
- name: '401'
dtype: float32
- name: '402'
dtype: float32
- name: '403'
dtype: float32
- name: '404'
dtype: float32
- name: '405'
dtype: float32
- name: '406'
dtype: float32
- name: '407'
dtype: float32
- name: '408'
dtype: float32
- name: '409'
dtype: float32
- name: '410'
dtype: float32
- name: '411'
dtype: float32
- name: '412'
dtype: float32
- name: '413'
dtype: float32
- name: '414'
dtype: float32
- name: '415'
dtype: float32
- name: '416'
dtype: float32
- name: '417'
dtype: float32
- name: '418'
dtype: float32
- name: '419'
dtype: float32
- name: '420'
dtype: float32
- name: '421'
dtype: float32
- name: '422'
dtype: float32
- name: '423'
dtype: float32
- name: '424'
dtype: float32
- name: '425'
dtype: float32
- name: '426'
dtype: float32
- name: '427'
dtype: float32
- name: '428'
dtype: float32
- name: '429'
dtype: float32
- name: '430'
dtype: float32
- name: '431'
dtype: float32
- name: '432'
dtype: float32
- name: '433'
dtype: float32
- name: '434'
dtype: float32
- name: '435'
dtype: float32
- name: '436'
dtype: float32
- name: '437'
dtype: float32
- name: '438'
dtype: float32
- name: '439'
dtype: float32
- name: '440'
dtype: float32
- name: '441'
dtype: float32
- name: '442'
dtype: float32
- name: '443'
dtype: float32
- name: '444'
dtype: float32
- name: '445'
dtype: float32
- name: '446'
dtype: float32
- name: '447'
dtype: float32
- name: '448'
dtype: float32
- name: '449'
dtype: float32
- name: '450'
dtype: float32
- name: '451'
dtype: float32
- name: '452'
dtype: float32
- name: '453'
dtype: float32
- name: '454'
dtype: float32
- name: '455'
dtype: float32
- name: '456'
dtype: float32
- name: '457'
dtype: float32
- name: '458'
dtype: float32
- name: '459'
dtype: float32
- name: '460'
dtype: float32
- name: '461'
dtype: float32
- name: '462'
dtype: float32
- name: '463'
dtype: float32
- name: '464'
dtype: float32
- name: '465'
dtype: float32
- name: '466'
dtype: float32
- name: '467'
dtype: float32
- name: '468'
dtype: float32
- name: '469'
dtype: float32
- name: '470'
dtype: float32
- name: '471'
dtype: float32
- name: '472'
dtype: float32
- name: '473'
dtype: float32
- name: '474'
dtype: float32
- name: '475'
dtype: float32
- name: '476'
dtype: float32
- name: '477'
dtype: float32
- name: '478'
dtype: float32
- name: '479'
dtype: float32
- name: '480'
dtype: float32
- name: '481'
dtype: float32
- name: '482'
dtype: float32
- name: '483'
dtype: float32
- name: '484'
dtype: float32
- name: '485'
dtype: float32
- name: '486'
dtype: float32
- name: '487'
dtype: float32
- name: '488'
dtype: float32
- name: '489'
dtype: float32
- name: '490'
dtype: float32
- name: '491'
dtype: float32
- name: '492'
dtype: float32
- name: '493'
dtype: float32
- name: '494'
dtype: float32
- name: '495'
dtype: float32
- name: '496'
dtype: float32
- name: '497'
dtype: float32
- name: '498'
dtype: float32
- name: '499'
dtype: float32
- name: '500'
dtype: float32
- name: '501'
dtype: float32
- name: '502'
dtype: float32
- name: '503'
dtype: float32
- name: '504'
dtype: float32
- name: '505'
dtype: float32
- name: '506'
dtype: float32
- name: '507'
dtype: float32
- name: '508'
dtype: float32
- name: '509'
dtype: float32
- name: '510'
dtype: float32
- name: '511'
dtype: float32
- name: '512'
dtype: float32
- name: '513'
dtype: float32
- name: '514'
dtype: float32
- name: '515'
dtype: float32
- name: '516'
dtype: float32
- name: '517'
dtype: float32
- name: '518'
dtype: float32
- name: '519'
dtype: float32
- name: '520'
dtype: float32
- name: '521'
dtype: float32
- name: '522'
dtype: float32
- name: '523'
dtype: float32
- name: '524'
dtype: float32
- name: '525'
dtype: float32
- name: '526'
dtype: float32
- name: '527'
dtype: float32
- name: '528'
dtype: float32
- name: '529'
dtype: float32
- name: '530'
dtype: float32
- name: '531'
dtype: float32
- name: '532'
dtype: float32
- name: '533'
dtype: float32
- name: '534'
dtype: float32
- name: '535'
dtype: float32
- name: '536'
dtype: float32
- name: '537'
dtype: float32
- name: '538'
dtype: float32
- name: '539'
dtype: float32
- name: '540'
dtype: float32
- name: '541'
dtype: float32
- name: '542'
dtype: float32
- name: '543'
dtype: float32
- name: '544'
dtype: float32
- name: '545'
dtype: float32
- name: '546'
dtype: float32
- name: '547'
dtype: float32
- name: '548'
dtype: float32
- name: '549'
dtype: float32
- name: '550'
dtype: float32
- name: '551'
dtype: float32
- name: '552'
dtype: float32
- name: '553'
dtype: float32
- name: '554'
dtype: float32
- name: '555'
dtype: float32
- name: '556'
dtype: float32
- name: '557'
dtype: float32
- name: '558'
dtype: float32
- name: '559'
dtype: float32
- name: '560'
dtype: float32
- name: '561'
dtype: float32
- name: '562'
dtype: float32
- name: '563'
dtype: float32
- name: '564'
dtype: float32
- name: '565'
dtype: float32
- name: '566'
dtype: float32
- name: '567'
dtype: float32
- name: '568'
dtype: float32
- name: '569'
dtype: float32
- name: '570'
dtype: float32
- name: '571'
dtype: float32
- name: '572'
dtype: float32
- name: '573'
dtype: float32
- name: '574'
dtype: float32
- name: '575'
dtype: float32
- name: '576'
dtype: float32
- name: '577'
dtype: float32
- name: '578'
dtype: float32
- name: '579'
dtype: float32
- name: '580'
dtype: float32
- name: '581'
dtype: float32
- name: '582'
dtype: float32
- name: '583'
dtype: float32
- name: '584'
dtype: float32
- name: '585'
dtype: float32
- name: '586'
dtype: float32
- name: '587'
dtype: float32
- name: '588'
dtype: float32
- name: '589'
dtype: float32
- name: '590'
dtype: float32
- name: '591'
dtype: float32
- name: '592'
dtype: float32
- name: '593'
dtype: float32
- name: '594'
dtype: float32
- name: '595'
dtype: float32
- name: '596'
dtype: float32
- name: '597'
dtype: float32
- name: '598'
dtype: float32
- name: '599'
dtype: float32
- name: '600'
dtype: float32
- name: '601'
dtype: float32
- name: '602'
dtype: float32
- name: '603'
dtype: float32
- name: '604'
dtype: float32
- name: '605'
dtype: float32
- name: '606'
dtype: float32
- name: '607'
dtype: float32
- name: '608'
dtype: float32
- name: '609'
dtype: float32
- name: '610'
dtype: float32
- name: '611'
dtype: float32
- name: '612'
dtype: float32
- name: '613'
dtype: float32
- name: '614'
dtype: float32
- name: '615'
dtype: float32
- name: '616'
dtype: float32
- name: '617'
dtype: float32
- name: '618'
dtype: float32
- name: '619'
dtype: float32
- name: '620'
dtype: float32
- name: '621'
dtype: float32
- name: '622'
dtype: float32
- name: '623'
dtype: float32
- name: '624'
dtype: float32
- name: '625'
dtype: float32
- name: '626'
dtype: float32
- name: '627'
dtype: float32
- name: '628'
dtype: float32
- name: '629'
dtype: float32
- name: '630'
dtype: float32
- name: '631'
dtype: float32
- name: '632'
dtype: float32
- name: '633'
dtype: float32
- name: '634'
dtype: float32
- name: '635'
dtype: float32
- name: '636'
dtype: float32
- name: '637'
dtype: float32
- name: '638'
dtype: float32
- name: '639'
dtype: float32
- name: '640'
dtype: float32
- name: '641'
dtype: float32
- name: '642'
dtype: float32
- name: '643'
dtype: float32
- name: '644'
dtype: float32
- name: '645'
dtype: float32
- name: '646'
dtype: float32
- name: '647'
dtype: float32
- name: '648'
dtype: float32
- name: '649'
dtype: float32
- name: '650'
dtype: float32
- name: '651'
dtype: float32
- name: '652'
dtype: float32
- name: '653'
dtype: float32
- name: '654'
dtype: float32
- name: '655'
dtype: float32
- name: '656'
dtype: float32
- name: '657'
dtype: float32
- name: '658'
dtype: float32
- name: '659'
dtype: float32
- name: '660'
dtype: float32
- name: '661'
dtype: float32
- name: '662'
dtype: float32
- name: '663'
dtype: float32
- name: '664'
dtype: float32
- name: '665'
dtype: float32
- name: '666'
dtype: float32
- name: '667'
dtype: float32
- name: '668'
dtype: float32
- name: '669'
dtype: float32
- name: '670'
dtype: float32
- name: '671'
dtype: float32
- name: '672'
dtype: float32
- name: '673'
dtype: float32
- name: '674'
dtype: float32
- name: '675'
dtype: float32
- name: '676'
dtype: float32
- name: '677'
dtype: float32
- name: '678'
dtype: float32
- name: '679'
dtype: float32
- name: '680'
dtype: float32
- name: '681'
dtype: float32
- name: '682'
dtype: float32
- name: '683'
dtype: float32
- name: '684'
dtype: float32
- name: '685'
dtype: float32
- name: '686'
dtype: float32
- name: '687'
dtype: float32
- name: '688'
dtype: float32
- name: '689'
dtype: float32
- name: '690'
dtype: float32
- name: '691'
dtype: float32
- name: '692'
dtype: float32
- name: '693'
dtype: float32
- name: '694'
dtype: float32
- name: '695'
dtype: float32
- name: '696'
dtype: float32
- name: '697'
dtype: float32
- name: '698'
dtype: float32
- name: '699'
dtype: float32
- name: '700'
dtype: float32
- name: '701'
dtype: float32
- name: '702'
dtype: float32
- name: '703'
dtype: float32
- name: '704'
dtype: float32
- name: '705'
dtype: float32
- name: '706'
dtype: float32
- name: '707'
dtype: float32
- name: '708'
dtype: float32
- name: '709'
dtype: float32
- name: '710'
dtype: float32
- name: '711'
dtype: float32
- name: '712'
dtype: float32
- name: '713'
dtype: float32
- name: '714'
dtype: float32
- name: '715'
dtype: float32
- name: '716'
dtype: float32
- name: '717'
dtype: float32
- name: '718'
dtype: float32
- name: '719'
dtype: float32
- name: '720'
dtype: float32
- name: '721'
dtype: float32
- name: '722'
dtype: float32
- name: '723'
dtype: float32
- name: '724'
dtype: float32
- name: '725'
dtype: float32
- name: '726'
dtype: float32
- name: '727'
dtype: float32
- name: '728'
dtype: float32
- name: '729'
dtype: float32
- name: '730'
dtype: float32
- name: '731'
dtype: float32
- name: '732'
dtype: float32
- name: '733'
dtype: float32
- name: '734'
dtype: float32
- name: '735'
dtype: float32
- name: '736'
dtype: float32
- name: '737'
dtype: float32
- name: '738'
dtype: float32
- name: '739'
dtype: float32
- name: '740'
dtype: float32
- name: '741'
dtype: float32
- name: '742'
dtype: float32
- name: '743'
dtype: float32
- name: '744'
dtype: float32
- name: '745'
dtype: float32
- name: '746'
dtype: float32
- name: '747'
dtype: float32
- name: '748'
dtype: float32
- name: '749'
dtype: float32
- name: '750'
dtype: float32
- name: '751'
dtype: float32
- name: '752'
dtype: float32
- name: '753'
dtype: float32
- name: '754'
dtype: float32
- name: '755'
dtype: float32
- name: '756'
dtype: float32
- name: '757'
dtype: float32
- name: '758'
dtype: float32
- name: '759'
dtype: float32
- name: '760'
dtype: float32
- name: '761'
dtype: float32
- name: '762'
dtype: float32
- name: '763'
dtype: float32
- name: '764'
dtype: float32
- name: '765'
dtype: float32
- name: '766'
dtype: float32
- name: '767'
dtype: float32
- name: label
dtype: string
splits:
- name: train
num_bytes: 80318780.21618997
num_examples: 26057
- name: test
num_bytes: 26774087.073587257
num_examples: 8686
download_size: 147162702
dataset_size: 107092867.28977722
---
# Dataset Card for "AA_DistilRoBERTa_FT3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AdapterOcean/chemistry_dataset_standardized_cluster_4 | ---
dataset_info:
features:
- name: text
dtype: string
- name: conversation_id
dtype: int64
- name: embedding
sequence: float64
- name: cluster
dtype: int64
splits:
- name: train
num_bytes: 28959443
num_examples: 3030
download_size: 7966037
dataset_size: 28959443
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "chemistry_dataset_standardized_cluster_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
irds/mr-tydi_te_test | ---
pretty_name: '`mr-tydi/te/test`'
viewer: false
source_datasets: ['irds/mr-tydi_te']
task_categories:
- text-retrieval
---
# Dataset Card for `mr-tydi/te/test`
The `mr-tydi/te/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/mr-tydi#mr-tydi/te/test).
# Data
This dataset provides:
- `queries` (i.e., topics); count=646
- `qrels`: (relevance assessments); count=677
- For `docs`, use [`irds/mr-tydi_te`](https://huggingface.co/datasets/irds/mr-tydi_te)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/mr-tydi_te_test', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/mr-tydi_te_test', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@article{Zhang2021MrTyDi,
title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval},
author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin},
year={2021},
journal={arXiv:2108.08787},
}
@article{Clark2020TyDiQa,
title={{TyDi QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
author={Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
year={2020},
journal={Transactions of the Association for Computational Linguistics}
}
```
|
kristmh/high_priority_or_not_high_2 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
- split: validate
path: data/validate-*
dataset_info:
features:
- name: text_clean
dtype: string
- name: label
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: test
num_bytes: 78347
num_examples: 96
- name: train
num_bytes: 522155
num_examples: 768
- name: validate
num_bytes: 62391
num_examples: 96
download_size: 335552
dataset_size: 662893
---
# Dataset Card for "high_priority_or_not_high_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
irlab-udc/metahate | ---
license: cc-by-nc-sa-4.0
task_categories:
- text-classification
language:
- en
pretty_name: MetaHate
size_categories:
- 1M<n<10M
---
# MetaHate: A Dataset for Unifying Efforts on Hate Speech Detection
This is MetaHate: a meta-collection of 36 hate speech datasets from social media comments.
## Dataset Structure
The dataset contains 1,226,202 social media posts in a TSV file. Each element contains the following fields:
| Field Name | Type | Possible Values | Description |
|------------|------|-----------------|----------------------------------------------------------------------|
| text | str | any | Social media post. Each post is unique. |
| label | int | 0, 1 | Label of the post. 0 for non-hate speech posts, 1 for hate speech. |
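Since each row is simply a tab-separated `text`/`label` pair, the file can be parsed with Python's standard `csv` module. This is a minimal sketch: the sample rows below are hypothetical stand-ins for the real data, and the actual file may require the agreements described under Usage.

```python
import csv
import io

# Hypothetical sample mirroring MetaHate's TSV layout: a header row,
# then one post per line with a binary hate-speech label.
sample_tsv = "text\tlabel\nhave a nice day\t0\nsome hateful post\t1\n"

def load_metahate(fileobj):
    """Parse a MetaHate-style TSV into a list of (text, label) tuples."""
    reader = csv.DictReader(fileobj, delimiter="\t")
    return [(row["text"], int(row["label"])) for row in reader]

# In practice you would pass open("metahate.tsv", encoding="utf-8") here.
posts = load_metahate(io.StringIO(sample_tsv))
hate_posts = [text for text, label in posts if label == 1]
print(len(posts), len(hate_posts))  # 2 posts, 1 labeled as hate speech
```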
## Usage
In order to use MetaHate you need to agree to our Terms and Conditions. Access to the complete meta-collection (1,226,202 instances) will be granted only upon submission of all relevant agreements for the derived datasets. Otherwise, we will only provide access to the publicly available subset (1,101,165 instances).
To access the full data, we require you to accept the original Terms of Use of the following works:
- [A Large Labeled Corpus for Online Harassment Research (Golbeck et al. 2017)](https://doi.org/10.1145/3091478.3091509)
- [The 'Call me sexist but' Dataset (Samory et al. 2021)](https://search.gesis.org/research_data/SDN-10.7802-2251)
- [Are You a Racist or Am I Seeing Things? Annotator Influence on Hate Speech Detection on Twitter (Waseem 2016)](https://doi.org/10.18653/v1/W16-5618)
- [Hateful Symbols or Hateful People? Predictive Features for Hate Speech Detection on Twitter (Waseem and Hovy 2016)](https://doi.org/10.18653/v1/N16-2013)
- [Aggression-annotated Corpus of Hindi-English Code-mixed Data (Kumar et al. 2018)](https://aclanthology.org/L18-1226)
- [#MeTooMA: Multi-Aspect Annotations of Tweets Related to the MeToo Movement (Gautam et al. 2020)](https://doi.org/10.1609/icwsm.v14i1.7292)
- [Pinpointing Fine-Grained Relationships between Hateful Tweets and Replies (Albanyan and Blanco 2022)](https://doi.org/10.1609/aaai.v36i10.21284)
- [Large-Scale Hate Speech Detection with Cross-Domain Transfer (Toraman, Şahinuç, and Yilmaz 2022)](https://aclanthology.org/2022.lrec-1.238)
- [Developing a Multilingual Annotated Corpus of Misogyny and Aggression (Bhattacharya et al. 2020)](https://aclanthology.org/2020.trac-1.25)
## Disclaimer
This dataset includes content that may contain hate speech, offensive language, or other forms of inappropriate and objectionable material. The content present in the dataset is not created or endorsed by the authors or contributors of this project. It is collected from various sources and does not necessarily reflect the views or opinions of the project maintainers.
The purpose of using this dataset is for research, analysis, or educational purposes only. The authors do not endorse or promote any harmful, discriminatory, or offensive behaviour conveyed in the dataset.
Users are advised to exercise caution and sensitivity when interacting with or interpreting the dataset. If you choose to use the dataset, it is recommended to handle the content responsibly and in compliance with ethical guidelines and applicable laws.
The project maintainers disclaim any responsibility for the content within the dataset and cannot be held liable for how it is used or interpreted by others.
## Citation
If you use this dataset, please cite the following reference:
```bibtex
@misc{piot2024metahate,
title={MetaHate: A Dataset for Unifying Efforts on Hate Speech Detection},
author={Paloma Piot and Patricia Martín-Rodilla and Javier Parapar},
year={2024},
eprint={2401.06526},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Acknowledgements
The authors thank the funding from the Horizon Europe research and innovation programme under the Marie Skłodowska-Curie Grant Agreement No. 101073351. The authors also thank the financial support supplied by the Consellería de Cultura, Educación, Formación Profesional e Universidades (accreditation 2019-2022 ED431G/01, ED431B 2022/33) and the European Regional Development Fund, which acknowledges the CITIC Research Center in ICT of the University of A Coruña as a Research Center of the Galician University System and the project PID2022-137061OB-C21 (Ministerio de Ciencia e Innovación, Agencia Estatal de Investigación, Proyectos de Generación de Conocimiento; supported by the European Regional Development Fund). The authors also thank the funding of project PLEC2021-007662 (MCIN/AEI/10.13039/501100011033, Ministerio de Ciencia e Innovación, Agencia Estatal de Investigación, Plan de Recuperación, Transformación y Resiliencia, Unión Europea-Next Generation EU). |
OOPPEENN/Oyama_Mahiro | ---
license: gpl-3.0
---
|
UCLNLP/adversarial_qa | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
- open-domain-qa
paperswithcode_id: adversarialqa
pretty_name: adversarialQA
dataset_info:
- config_name: adversarialQA
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: metadata
struct:
- name: split
dtype: string
- name: model_in_the_loop
dtype: string
splits:
- name: train
num_bytes: 27858686
num_examples: 30000
- name: validation
num_bytes: 2757092
num_examples: 3000
- name: test
num_bytes: 2919479
num_examples: 3000
download_size: 5301049
dataset_size: 33535257
- config_name: dbert
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: metadata
struct:
- name: split
dtype: string
- name: model_in_the_loop
dtype: string
splits:
- name: train
num_bytes: 9345521
num_examples: 10000
- name: validation
num_bytes: 918156
num_examples: 1000
- name: test
num_bytes: 971290
num_examples: 1000
download_size: 2689032
dataset_size: 11234967
- config_name: dbidaf
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: metadata
struct:
- name: split
dtype: string
- name: model_in_the_loop
dtype: string
splits:
- name: train
num_bytes: 9282482
num_examples: 10000
- name: validation
num_bytes: 917907
num_examples: 1000
- name: test
num_bytes: 946947
num_examples: 1000
download_size: 2721341
dataset_size: 11147336
- config_name: droberta
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: metadata
struct:
- name: split
dtype: string
- name: model_in_the_loop
dtype: string
splits:
- name: train
num_bytes: 9270683
num_examples: 10000
- name: validation
num_bytes: 925029
num_examples: 1000
- name: test
num_bytes: 1005242
num_examples: 1000
download_size: 2815452
dataset_size: 11200954
configs:
- config_name: adversarialQA
data_files:
- split: train
path: adversarialQA/train-*
- split: validation
path: adversarialQA/validation-*
- split: test
path: adversarialQA/test-*
- config_name: dbert
data_files:
- split: train
path: dbert/train-*
- split: validation
path: dbert/validation-*
- split: test
path: dbert/test-*
- config_name: dbidaf
data_files:
- split: train
path: dbidaf/train-*
- split: validation
path: dbidaf/validation-*
- split: test
path: dbidaf/test-*
- config_name: droberta
data_files:
- split: train
path: droberta/train-*
- split: validation
path: droberta/validation-*
- split: test
path: droberta/test-*
train-eval-index:
- config: adversarialQA
task: question-answering
task_id: extractive_question_answering
splits:
train_split: train
eval_split: validation
col_mapping:
question: question
context: context
answers:
text: text
answer_start: answer_start
metrics:
- type: squad
name: SQuAD
---
# Dataset Card for adversarialQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [adversarialQA homepage](https://adversarialqa.github.io/)
- **Repository:** [adversarialQA repository](https://github.com/maxbartolo/adversarialQA)
- **Paper:** [Beat the AI: Investigating Adversarial Human Annotation for Reading Comprehension](https://arxiv.org/abs/2002.00293)
- **Leaderboard:** [Dynabench QA Round 1 Leaderboard](https://dynabench.org/tasks/2#overall)
- **Point of Contact:** [Max Bartolo](max.bartolo@ucl.ac.uk)
### Dataset Summary
We have created three new Reading Comprehension datasets constructed using an adversarial model-in-the-loop.
We use three different models in the annotation loop: BiDAF (Seo et al., 2016), BERT-Large (Devlin et al., 2018), and RoBERTa-Large (Liu et al., 2019). From these we construct three datasets: D(BiDAF), D(BERT), and D(RoBERTa), each with 10,000 training examples, 1,000 validation examples, and 1,000 test examples.
The adversarial human annotation paradigm ensures that these datasets consist of questions that current state-of-the-art models (at least the ones used as adversaries in the annotation loop) find challenging. The three AdversarialQA round 1 datasets provide a training and evaluation resource for such methods.
### Supported Tasks and Leaderboards
`extractive-qa`: The dataset can be used to train a model for Extractive Question Answering, which consists of selecting the answer to a question from a passage. Success on this task is typically measured by achieving a high word-overlap [F1 score](https://huggingface.co/metrics/f1). The [RoBERTa-Large](https://huggingface.co/roberta-large) model trained on all the data combined with [SQuAD](https://arxiv.org/abs/1606.05250) currently achieves 64.35% F1. This task has an active leaderboard, available as round 1 of the QA task on [Dynabench](https://dynabench.org/tasks/2#overall), which ranks models based on F1 score.
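The word-overlap F1 used here can be sketched in a few lines. This is a minimal version of the SQuAD-style metric; the official evaluation script additionally normalizes punctuation and articles before comparing tokens:

```python
from collections import Counter

def f1_score(prediction: str, gold: str) -> float:
    """SQuAD-style word-overlap F1 between a predicted and a gold answer."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    # Count tokens shared between prediction and gold (with multiplicity).
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(round(f1_score("organic compounds", "important organic compounds"), 4))  # -> 0.8
```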
### Languages
The text in the dataset is in English. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
Data is provided in the same format as SQuAD 1.1. An example is shown below:
```
{
"data": [
{
"title": "Oxygen",
"paragraphs": [
{
"context": "Among the most important classes of organic compounds that contain oxygen are (where \"R\" is an organic group): alcohols (R-OH); ethers (R-O-R); ketones (R-CO-R); aldehydes (R-CO-H); carboxylic acids (R-COOH); esters (R-COO-R); acid anhydrides (R-CO-O-CO-R); and amides (R-C(O)-NR2). There are many important organic solvents that contain oxygen, including: acetone, methanol, ethanol, isopropanol, furan, THF, diethyl ether, dioxane, ethyl acetate, DMF, DMSO, acetic acid, and formic acid. Acetone ((CH3)2CO) and phenol (C6H5OH) are used as feeder materials in the synthesis of many different substances. Other important organic compounds that contain oxygen are: glycerol, formaldehyde, glutaraldehyde, citric acid, acetic anhydride, and acetamide. Epoxides are ethers in which the oxygen atom is part of a ring of three atoms.",
"qas": [
{
"id": "22bbe104aa72aa9b511dd53237deb11afa14d6e3",
"question": "In addition to having oxygen, what do alcohols, ethers and esters have in common, according to the article?",
"answers": [
{
"answer_start": 36,
"text": "organic compounds"
}
]
},
{
"id": "4240a8e708c703796347a3702cf1463eed05584a",
"question": "What letter does the abbreviation for acid anhydrides both begin and end in?",
"answers": [
{
"answer_start": 244,
"text": "R"
}
]
},
{
"id": "0681a0a5ec852ec6920d6a30f7ef65dced493366",
"question": "Which of the organic compounds, in the article, contains nitrogen?",
"answers": [
{
"answer_start": 262,
"text": "amides"
}
]
},
{
"id": "2990efe1a56ccf81938fa5e18104f7d3803069fb",
"question": "Which of the important classes of organic compounds, in the article, has a number in its abbreviation?",
"answers": [
{
"answer_start": 262,
"text": "amides"
}
]
}
]
}
]
}
]
}
```
### Data Fields
- title: the title of the Wikipedia page from which the context is sourced
- context: the context/passage
- id: a string identifier for each question
- answers: a list of all provided answers (one per question in our case, but multiple may exist in SQuAD) with an `answer_start` field which is the character index of the start of the answer span, and a `text` field which is the answer text.
Note that no answers are provided in the test set. Indeed, this dataset is part of the DynaBench benchmark, for which you can submit your predictions on the [website](https://dynabench.org/tasks/2#1).
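The `answer_start` offsets can be sanity-checked directly against the context. A minimal check, using the first QA pair from the example above (context shortened for brevity):

```python
# Validate that an answer's character offset actually points at the answer text.
context = ('Among the most important classes of organic compounds that contain '
           'oxygen are (where "R" is an organic group): alcohols (R-OH); ...')
answer = {"answer_start": 36, "text": "organic compounds"}

start = answer["answer_start"]
span = context[start: start + len(answer["text"])]
print(span)  # -> organic compounds
```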
### Data Splits
The dataset is composed of three different datasets constructed using different models in the loop: BiDAF, BERT-Large, and RoBERTa-Large. Each of these has 10,000 training examples, 1,000 validation examples, and 1,000 test examples for a total of 30,000/3,000/3,000 train/validation/test examples.
## Dataset Creation
### Curation Rationale
This dataset was collected to provide a more challenging and diverse Reading Comprehension dataset to state-of-the-art models.
### Source Data
#### Initial Data Collection and Normalization
The source passages are from Wikipedia and are the same as those used in [SQuAD v1.1](https://arxiv.org/abs/1606.05250).
#### Who are the source language producers?
The source language producers are Wikipedia editors for the passages, and human annotators on Mechanical Turk for the questions.
### Annotations
#### Annotation process
The dataset is collected through an adversarial human annotation process which pairs a human annotator and a reading comprehension model in an interactive setting. The human is presented with a passage for which they write a question and highlight the correct answer. The model then tries to answer the question, and, if it fails to answer correctly, the human wins. Otherwise, the human modifies or re-writes their question until they successfully fool the model.
#### Who are the annotators?
The annotators are from Amazon Mechanical Turk, geographically restricted to the USA, UK, and Canada, having previously successfully completed at least 1,000 HITs, and having a HIT approval rate greater than 98%. Crowdworkers undergo intensive training and qualification prior to annotation.
### Personal and Sensitive Information
No annotator identifying details are provided.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop better question answering systems.
A system that succeeds at the supported task would be able to provide an accurate extractive answer from a short passage. This dataset is to be seen as a test bed for questions that contemporary state-of-the-art models struggle to answer correctly, thus often requiring more complex comprehension abilities than, say, detecting phrases explicitly mentioned in the passage with high overlap with the question.
It should be noted, however, that the source passages are both domain-restricted and linguistically specific, and that the provided questions and answers do not constitute any particular social application.
### Discussion of Biases
The dataset may exhibit various biases in terms of the source passage selection, annotated questions and answers, as well as algorithmic biases resulting from the adversarial annotation protocol.
### Other Known Limitations
N/a
## Additional Information
### Dataset Curators
This dataset was initially created by Max Bartolo, Alastair Roberts, Johannes Welbl, Sebastian Riedel, and Pontus Stenetorp, during work carried out at University College London (UCL).
### Licensing Information
This dataset is distributed under [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/).
### Citation Information
```
@article{bartolo2020beat,
author = {Bartolo, Max and Roberts, Alastair and Welbl, Johannes and Riedel, Sebastian and Stenetorp, Pontus},
title = {Beat the AI: Investigating Adversarial Human Annotation for Reading Comprehension},
journal = {Transactions of the Association for Computational Linguistics},
volume = {8},
number = {},
pages = {662-678},
year = {2020},
doi = {10.1162/tacl\_a\_00338},
URL = { https://doi.org/10.1162/tacl_a_00338 },
eprint = { https://doi.org/10.1162/tacl_a_00338 },
abstract = { Innovations in annotation methodology have been a catalyst for Reading Comprehension (RC) datasets and models. One recent trend to challenge current RC models is to involve a model in the annotation process: Humans create questions adversarially, such that the model fails to answer them correctly. In this work we investigate this annotation methodology and apply it in three different settings, collecting a total of 36,000 samples with progressively stronger models in the annotation loop. This allows us to explore questions such as the reproducibility of the adversarial effect, transfer from data collected with varying model-in-the-loop strengths, and generalization to data collected without a model. We find that training on adversarially collected samples leads to strong generalization to non-adversarially collected datasets, yet with progressive performance deterioration with increasingly stronger models-in-the-loop. Furthermore, we find that stronger models can still learn from datasets collected with substantially weaker models-in-the-loop. When trained on data collected with a BiDAF model in the loop, RoBERTa achieves 39.9F1 on questions that it cannot answer when trained on SQuAD—only marginally lower than when trained on data collected using RoBERTa itself (41.0F1). }
}
```
### Contributions
Thanks to [@maxbartolo](https://github.com/maxbartolo) for adding this dataset. |
Nexdata/602_People_3010_Images_Multi_Races_Human_Body_Semantic_Segmentation_Data | ---
license: cc-by-nc-nd-4.0
---
## Description
602 People – 3,010 Images Multi-Races Human Body Semantic Segmentation Data. The data diversity includes headphones, body, background, and glasses. In terms of annotation, we adopted segmentation annotations on headphones, body, background, and glasses. The data can be used for tasks such as human body segmentation and behavior detection in video conferencing.
For more details, please refer to the link: https://www.nexdata.ai/dataset/1181?source=Huggingface
## Data size
602 people, 5 images for each person
## Collection environment
Office, coffee shop, supermarket, apartment
## Race distribution
151 Asian people, 151 black people, 150 Caucasians people, 150 brown people ,ranging from teenager to middle-aged people, (Aged between 16 and 60)
## Gender distribution
301 males, 301 females
## Data diversity
different poses, different ages, different races, different collection backgrounds
## Device
computer, cellphone
## Collecting angles
eye-level angle
## Data format
the image data format is .jpg, the annotation file (mask) format is .png
## Annotation content
segmentation annotation of headphones, body, background, glasses
## Accuracy
based on the accuracy of the actions, the accuracy is more than 97%; Accuracy of semantic
# Licensing Information
Commercial License
|
tyzhu/wikitext-103-raw-v1-shuffled | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 544387646
num_examples: 1801350
- name: validation
num_bytes: 1159288
num_examples: 3760
download_size: 316392316
dataset_size: 545546934
---
# Dataset Card for "wikitext-103-raw-v1-shuffled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BangumiBase/theapothecarydiaries | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of The Apothecary Diaries
This is the image base of bangumi The Apothecary Diaries, we detected 88 characters, 8935 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean and may contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 2852 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 181 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 149 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 125 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 12 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 53 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 192 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 17 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 19 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 8 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 38 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 25 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 6 | [Download](12/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 13 | 9 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 16 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 565 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 187 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 61 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 305 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 225 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 35 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 190 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 42 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 1010 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 68 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 22 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 22 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 45 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 27 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 231 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 16 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 76 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 232 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| 33 | 16 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| 34 | 97 | [Download](34/dataset.zip) |  |  |  |  |  |  |  |  |
| 35 | 26 | [Download](35/dataset.zip) |  |  |  |  |  |  |  |  |
| 36 | 34 | [Download](36/dataset.zip) |  |  |  |  |  |  |  |  |
| 37 | 60 | [Download](37/dataset.zip) |  |  |  |  |  |  |  |  |
| 38 | 90 | [Download](38/dataset.zip) |  |  |  |  |  |  |  |  |
| 39 | 30 | [Download](39/dataset.zip) |  |  |  |  |  |  |  |  |
| 40 | 53 | [Download](40/dataset.zip) |  |  |  |  |  |  |  |  |
| 41 | 35 | [Download](41/dataset.zip) |  |  |  |  |  |  |  |  |
| 42 | 28 | [Download](42/dataset.zip) |  |  |  |  |  |  |  |  |
| 43 | 49 | [Download](43/dataset.zip) |  |  |  |  |  |  |  |  |
| 44 | 29 | [Download](44/dataset.zip) |  |  |  |  |  |  |  |  |
| 45 | 121 | [Download](45/dataset.zip) |  |  |  |  |  |  |  |  |
| 46 | 54 | [Download](46/dataset.zip) |  |  |  |  |  |  |  |  |
| 47 | 64 | [Download](47/dataset.zip) |  |  |  |  |  |  |  |  |
| 48 | 11 | [Download](48/dataset.zip) |  |  |  |  |  |  |  |  |
| 49 | 9 | [Download](49/dataset.zip) |  |  |  |  |  |  |  |  |
| 50 | 30 | [Download](50/dataset.zip) |  |  |  |  |  |  |  |  |
| 51 | 16 | [Download](51/dataset.zip) |  |  |  |  |  |  |  |  |
| 52 | 9 | [Download](52/dataset.zip) |  |  |  |  |  |  |  |  |
| 53 | 11 | [Download](53/dataset.zip) |  |  |  |  |  |  |  |  |
| 54 | 13 | [Download](54/dataset.zip) |  |  |  |  |  |  |  |  |
| 55 | 78 | [Download](55/dataset.zip) |  |  |  |  |  |  |  |  |
| 56 | 9 | [Download](56/dataset.zip) |  |  |  |  |  |  |  |  |
| 57 | 114 | [Download](57/dataset.zip) |  |  |  |  |  |  |  |  |
| 58 | 8 | [Download](58/dataset.zip) |  |  |  |  |  |  |  |  |
| 59 | 18 | [Download](59/dataset.zip) |  |  |  |  |  |  |  |  |
| 60 | 17 | [Download](60/dataset.zip) |  |  |  |  |  |  |  |  |
| 61 | 14 | [Download](61/dataset.zip) |  |  |  |  |  |  |  |  |
| 62 | 7 | [Download](62/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 63 | 22 | [Download](63/dataset.zip) |  |  |  |  |  |  |  |  |
| 64 | 12 | [Download](64/dataset.zip) |  |  |  |  |  |  |  |  |
| 65 | 14 | [Download](65/dataset.zip) |  |  |  |  |  |  |  |  |
| 66 | 35 | [Download](66/dataset.zip) |  |  |  |  |  |  |  |  |
| 67 | 72 | [Download](67/dataset.zip) |  |  |  |  |  |  |  |  |
| 68 | 13 | [Download](68/dataset.zip) |  |  |  |  |  |  |  |  |
| 69 | 20 | [Download](69/dataset.zip) |  |  |  |  |  |  |  |  |
| 70 | 11 | [Download](70/dataset.zip) |  |  |  |  |  |  |  |  |
| 71 | 8 | [Download](71/dataset.zip) |  |  |  |  |  |  |  |  |
| 72 | 16 | [Download](72/dataset.zip) |  |  |  |  |  |  |  |  |
| 73 | 23 | [Download](73/dataset.zip) |  |  |  |  |  |  |  |  |
| 74 | 14 | [Download](74/dataset.zip) |  |  |  |  |  |  |  |  |
| 75 | 7 | [Download](75/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 76 | 8 | [Download](76/dataset.zip) |  |  |  |  |  |  |  |  |
| 77 | 12 | [Download](77/dataset.zip) |  |  |  |  |  |  |  |  |
| 78 | 74 | [Download](78/dataset.zip) |  |  |  |  |  |  |  |  |
| 79 | 14 | [Download](79/dataset.zip) |  |  |  |  |  |  |  |  |
| 80 | 10 | [Download](80/dataset.zip) |  |  |  |  |  |  |  |  |
| 81 | 163 | [Download](81/dataset.zip) |  |  |  |  |  |  |  |  |
| 82 | 15 | [Download](82/dataset.zip) |  |  |  |  |  |  |  |  |
| 83 | 9 | [Download](83/dataset.zip) |  |  |  |  |  |  |  |  |
| 84 | 66 | [Download](84/dataset.zip) |  |  |  |  |  |  |  |  |
| 85 | 9 | [Download](85/dataset.zip) |  |  |  |  |  |  |  |  |
| 86 | 12 | [Download](86/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 65 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
DBQ/Chanel.Product.prices.Germany | ---
annotations_creators:
- other
language_creators:
- other
language:
- en
license:
- unknown
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text-classification
- image-classification
- feature-extraction
- image-segmentation
- image-to-image
- image-to-text
- object-detection
- summarization
- zero-shot-image-classification
pretty_name: Germany - Chanel - Product-level price list
tags:
- webscraping
- ecommerce
- Chanel
- fashion
- fashion product
- image
- fashion image
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: website_name
dtype: string
- name: competence_date
dtype: string
- name: country_code
dtype: string
- name: currency_code
dtype: string
- name: brand
dtype: string
- name: category1_code
dtype: string
- name: category2_code
dtype: string
- name: category3_code
dtype: string
- name: product_code
dtype: string
- name: title
dtype: string
- name: itemurl
dtype: string
- name: imageurl
dtype: string
- name: full_price
dtype: float64
- name: price
dtype: float64
- name: full_price_eur
dtype: float64
- name: price_eur
dtype: float64
- name: flg_discount
dtype: int64
splits:
- name: train
num_bytes: 777798
num_examples: 1428
download_size: 199720
dataset_size: 777798
---
# Chanel web scraped data
## About the website
In the EMEA region, particularly in **Germany**, the luxury fashion industry has greatly evolved over the years. The **luxury fashion industry** features renowned brands such as **Chanel**, among others. Consumers in Germany exhibit a strong appetite for high-end fashion, thereby intensifying the competition in this sector. With the advent of digitization, many of these high-end brands have shifted to online platforms. Especially during the COVID-19 pandemic, **Ecommerce** has become pivotal in this industry. The dataset observed comprises the **Ecommerce product-list page (PLP) data** on Chanel's product availability, prices, and sales in Germany, providing a detailed perspective of the brand's digital market status.
## Link to **dataset**
[Germany - Chanel - Product-level price list dataset](https://www.databoutique.com/buy-data-page/Chanel%20Product-prices%20Germany/r/recTZkFTUHIvrjoB3)
|
YufeiHFUT/BioRED_multipleSentence_finalAnswer | ---
dataset_info:
features:
- name: data
dtype: string
splits:
- name: train
num_bytes: 16094857
num_examples: 3831
- name: validation
num_bytes: 4920945
num_examples: 1114
- name: test
num_bytes: 4262122
num_examples: 990
download_size: 2711010
dataset_size: 25277924
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
one-sec-cv12/chunk_169 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 23335822080.0
num_examples: 242960
download_size: 21149772594
dataset_size: 23335822080.0
---
# Dataset Card for "chunk_169"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
heliosprime/twitter_dataset_1713217220 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 25539
num_examples: 66
download_size: 21689
dataset_size: 25539
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1713217220"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nikchar/Large_training_set_40kclaims | ---
dataset_info:
features:
- name: label
dtype: string
- name: claim
dtype: string
- name: evidence_wiki_url
dtype: string
splits:
- name: train
num_bytes: 3252366
num_examples: 39752
download_size: 1954676
dataset_size: 3252366
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Large_training_set_40kclaims"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Seanxh/twitter_dataset_1713190888 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 44541
num_examples: 102
download_size: 21386
dataset_size: 44541
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ibranze/araproje_mmlu_en_conf_mgpt_nearestscore_true_x | ---
dataset_info:
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: validation
num_bytes: 130579.0
num_examples: 250
download_size: 79223
dataset_size: 130579.0
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "araproje_mmlu_en_conf_mgpt_nearestscore_true_x"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
shesselmans/guanaco-llama2-1k | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1655208
num_examples: 1000
download_size: 966969
dataset_size: 1655208
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "guanaco-llama2-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
adamo1139/basic_economics_questions_ts_test_3 | ---
license: apache-2.0
---
|
HuggingFaceM4/coco_support_query_sets | Invalid username or password. |
nelsntk/mtg-data | ---
language: en
size_categories:
- 10K<n<100K
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: rulings
num_bytes: 8005972
num_examples: 26718
- name: scraped
num_bytes: 11066974
num_examples: 14255
- name: rules_qna
num_bytes: 150903
num_examples: 493
- name: glossary
num_bytes: 110922
num_examples: 654
- name: rules
num_bytes: 1006878
num_examples: 2840
download_size: 10395273
dataset_size: 20341649
---
# Dataset Card for "mtg-data"
### Dataset Summary
The "mtg-data" dataset is a collection of prompts and responses related to Magic: The Gathering (MTG), a popular collectible card game.
The dataset contains various types of question and answer pairs, including official rulings as responses and corresponding questions generated by GPT-3.5,
Q&A data scraped from the web, glossary terms alongside their descriptions, and official rules formatted into Q/A pairs.
This dataset is designed to facilitate the development and research of AI models focused on understanding game dynamics, card interactions,
and providing judge rulings. |
CyberHarem/isolated_island_oni_kantaicollection | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of isolated_island_oni/離島棲鬼 (Kantai Collection)
This is the dataset of isolated_island_oni/離島棲鬼 (Kantai Collection), containing 41 images and their tags.
The core tags of this character are `black_hair, long_hair, red_eyes, horns, pale_skin, very_long_hair, bow, glowing_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:--------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 41 | 47.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/isolated_island_oni_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 41 | 33.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/isolated_island_oni_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 95 | 60.12 MiB | [Download](https://huggingface.co/datasets/CyberHarem/isolated_island_oni_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 41 | 44.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/isolated_island_oni_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 95 | 74.02 MiB | [Download](https://huggingface.co/datasets/CyberHarem/isolated_island_oni_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). To use it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/isolated_island_oni_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 34 |  |  |  |  |  | abyssal_ship, gothic_lolita, 1girl, looking_at_viewer, solo, smile, bonnet, black_dress, glowing, detached_sleeves, black_pantyhose |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | abyssal_ship | gothic_lolita | 1girl | looking_at_viewer | solo | smile | bonnet | black_dress | glowing | detached_sleeves | black_pantyhose |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------|:----------------|:--------|:--------------------|:-------|:--------|:---------|:--------------|:----------|:-------------------|:------------------|
| 0 | 34 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X |
|
Elain-q/dolma_low_quality_data | ---
license: apache-2.0
---
|
joey234/mmlu-logical_fallacies-rule-neg | ---
dataset_info:
features:
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: question
dtype: string
splits:
- name: test
num_bytes: 50750
num_examples: 163
download_size: 22649
dataset_size: 50750
---
# Dataset Card for "mmlu-logical_fallacies-rule-neg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
GoodiniSu/toma | ---
license: mit
---
|
bilgeyucel/seven-wonders | ---
language:
- en
size_categories:
- n<1K
--- |
cheulyop/ksponspeech | ---
---
# Dataset Card for KsponSpeech
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
KsponSpeech is a large-scale spontaneous speech corpus of Korean conversations. This corpus contains 969 hours of general open-domain dialog utterances, spoken by about 2,000 native Korean speakers in a clean environment. All data were constructed by recording the dialogue of two people freely conversing on a variety of topics and manually transcribing the utterances. The transcription provides a dual transcription consisting of orthography and pronunciation, and disfluency tags for spontaneity of speech, such as filler words, repeated words, and word fragments. KsponSpeech is publicly available on an open data hub site of the Korean government. (https://aihub.or.kr/aidata/105)
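The dual transcription described above can be post-processed to select either the orthographic or the phonetic form of each utterance. Below is a minimal sketch, assuming the `(orthographic)/(phonetic)` pair notation commonly used in KsponSpeech transcripts; the exact notation and tag set should be verified against the official corpus guidelines.

```python
import re

# Matches one dual-transcription pair of the assumed form
# "(orthographic)/(phonetic)", e.g. "(70%)/(칠십 퍼센트)".
PAIR = re.compile(r"\(([^)]*)\)/\(([^)]*)\)")

def select_transcription(sentence: str, form: str = "orthographic") -> str:
    """Keep either the orthographic or the phonetic member of each pair."""
    group = r"\1" if form == "orthographic" else r"\2"
    return PAIR.sub(group, sentence)

print(select_transcription("(70%)/(칠십 퍼센트) 정도 걸려요"))
# 70% 정도 걸려요
print(select_transcription("(70%)/(칠십 퍼센트) 정도 걸려요", form="phonetic"))
# 칠십 퍼센트 정도 걸려요
```

A similar pass could strip disfluency tags before training an ASR model, depending on whether spontaneity markers are wanted in the target text.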
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
|