datasetId stringlengths 2 117 | card stringlengths 19 1.01M |
|---|---|
dajor85570/invoices-and-receipts_ocr_v1 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: image
dtype: image
- name: id
dtype: string
- name: parsed_data
dtype: string
- name: raw_data
dtype: string
splits:
- name: train
num_bytes: 465061949.289
num_examples: 2043
- name: test
num_bytes: 23808463.0
num_examples: 125
- name: valid
num_bytes: 22325731.0
num_examples: 70
download_size: 281665599
dataset_size: 511196143.289
---
# Dataset Card for "invoices-and-receipts_ocr_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
texonom/md-chroma-instructor-xl | ---
license: mit
---
Embedding vectors generated by `hkunlp/instructor-xl`. |

presencesw/dataset_2000_complexquestion_0 | ---
dataset_info:
features:
- name: entities
sequence: 'null'
- name: triplets
sequence: 'null'
- name: answer
dtype: string
- name: complex_question
dtype: string
splits:
- name: train
num_bytes: 17893
num_examples: 200
download_size: 0
dataset_size: 17893
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dataset_2000_complexquestion_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
YBXL/JAMA_Reasoning_Rare_train | ---
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3202297
num_examples: 831
- name: valid
num_bytes: 3202297
num_examples: 831
- name: test
num_bytes: 3202297
num_examples: 831
download_size: 4684119
dataset_size: 9606891
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
---
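The `conversations` feature above is a list of `{from, value}` turns; a hypothetical record in that schema (contents invented for illustration):

```python
# Hypothetical record matching the schema declared above
record = {
    "id": "jama-0001",
    "conversations": [
        {"from": "human", "value": "Summarize the key finding."},
        {"from": "gpt", "value": "The study reports a rare presentation."},
    ],
    "text": "Summarize the key finding.\nThe study reports a rare presentation.",
}

# Every turn carries exactly the two declared string fields
assert all(set(turn) == {"from", "value"} for turn in record["conversations"])
```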
|
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test-mathemakitt-596cbd-1668659072 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test
eval_info:
task: text_zero_shot_classification
model: facebook/opt-6.7b
metrics: ['f1', 'perplexity']
dataset_name: mathemakitten/winobias_antistereotype_test
dataset_config: mathemakitten--winobias_antistereotype_test
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-6.7b
* Dataset: mathemakitten/winobias_antistereotype_test
* Config: mathemakitten--winobias_antistereotype_test
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ddcas](https://huggingface.co/ddcas) for evaluating this model. |
autoevaluate/autoeval-staging-eval-project-48057538-ec1b-4e18-ac2b-35070fb8202e-3735 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: binary_classification
model: autoevaluate/binary-classification
metrics: ['matthews_correlation']
dataset_name: glue
dataset_config: sst2
dataset_split: validation
col_mapping:
text: sentence
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
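The `col_mapping` block above tells the evaluator which dataset column to read for each expected field; a minimal sketch of applying such a mapping to a hypothetical SST-2-style row (values invented):

```python
# Evaluator field -> dataset column, as declared in the card above
col_mapping = {"text": "sentence", "target": "label"}

# Hypothetical SST-2-style row (illustrative values only)
row = {"sentence": "a touching film", "label": 1, "idx": 0}

# Project the row into the shape the evaluator expects
mapped = {field: row[column] for field, column in col_mapping.items()}
print(mapped)  # {'text': 'a touching film', 'target': 1}
```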
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/d0ea767e | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 180
num_examples: 10
download_size: 1325
dataset_size: 180
---
# Dataset Card for "d0ea767e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
M-AI-C/bukhari_en | ---
dataset_info:
features:
- name: reference
dtype: string
- name: arabic
dtype: string
- name: english
dtype: string
splits:
- name: train
num_bytes: 13277855
num_examples: 7277
download_size: 5245942
dataset_size: 13277855
---
# Dataset Card for "bukhari_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Nexdata/Mandarin_Interactive_Speech_Data_by_Mobile_Phone | ---
YAML tags:
- copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging
---
# Dataset Card for Nexdata/Mandarin_Interactive_Speech_Data_by_Mobile_Phone
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.nexdata.ai/datasets/981?source=Huggingface
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Mandarin home-interaction mobile-phone audio data (far-field home-collected audio subset), with a duration of 849 hours, recorded in real home scenes. The content focuses on home instructions, functional assistants and wake-up words, and is specially designed for smart-home applications, keeping it close to real data-application scenarios.
For more details, please refer to the link: https://www.nexdata.ai/datasets/981?source=Huggingface
### Supported Tasks and Leaderboards
automatic-speech-recognition, audio-speaker-identification: The dataset can be used to train a model for Automatic Speech Recognition (ASR).
### Languages
Chinese Mandarin
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing
### Citation Information
[More Information Needed]
### Contributions
|
ShrinivasSK/en_kn_3 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: src
dtype: string
- name: tgt
dtype: string
splits:
- name: train
num_bytes: 3976936.2
num_examples: 18000
- name: test
num_bytes: 441881.8
num_examples: 2000
download_size: 2363947
dataset_size: 4418818.0
---
# Dataset Card for "data_kn_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hle2000/Mintaka_Graph_Features_Updated_T5-large-ssm | ---
dataset_info:
features:
- name: question
dtype: string
- name: question_answer
dtype: string
- name: num_nodes
dtype: int64
- name: num_edges
dtype: int64
- name: density
dtype: float64
- name: cycle
dtype: int64
- name: bridge
dtype: int64
- name: katz_centrality
dtype: float64
- name: page_rank
dtype: float64
- name: avg_ssp_length
dtype: float64
- name: determ_sequence
dtype: string
- name: gap_sequence
dtype: string
- name: g2t_sequence
dtype: string
- name: determ_sequence_embedding
dtype: string
- name: gap_sequence_embedding
dtype: string
- name: g2t_sequence_embedding
dtype: string
- name: question_answer_embedding
dtype: string
- name: tfidf_vector
dtype: string
- name: correct
dtype: float64
splits:
- name: train
num_bytes: 10203485753
num_examples: 90261
- name: test
num_bytes: 2579614925
num_examples: 22772
download_size: 2782389958
dataset_size: 12783100678
---
# Dataset Card for "Mintaka_Graph_Features_Updated_T5-large-ssm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
suvadityamuk/keras-team-keras-cv-code-dataset | ---
dataset_info:
features:
- name: text
dtype: string
- name: id
dtype: string
- name: metadata
struct:
- name: file_path
dtype: string
- name: repo_id
dtype: string
- name: token_count
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 3151777
num_examples: 490
download_size: 0
dataset_size: 3151777
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "keras-team-keras-cv-code-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Wikit/nlu-covid | ---
license: apache-2.0
task_categories:
- text-classification
language:
- fr
---
A French benchmark of NLU services for an employee-support use case during the COVID-19 pandemic.
These datasets were created by the Wikit team to compare the performance of NLU tools on the French language.
The use case is employee support during the COVID-19 pandemic: the intents were defined to answer department employees' questions on how working conditions were evolving through the crisis.
- The training_dataset.csv file contains training utterances with associated intent used to train NLU services.
- The test_dataset.csv file contains test utterances with associated intent used to test NLU services.
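The two CSV files pair each utterance with its intent; a sketch of reading one with the standard library (the column names `text` and `intent` are assumptions, check the actual files):

```python
import csv
import io

# Stand-in for training_dataset.csv; real column names may differ
sample = io.StringIO(
    "text,intent\n"
    "Comment poser un congé ?,leave_request\n"
    "Puis-je télétravailler ?,remote_work\n"
)

rows = list(csv.DictReader(sample))
intents = sorted({row["intent"] for row in rows})
print(intents)  # ['leave_request', 'remote_work']
```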
To use this work, please cite:
> Marion Schaeffer, Christophe Bouvard. Comparaison des solutions de NLU sur un corpus français pour un chatbot de support COVID-19. IC 2022 - Journées francophones d’Ingénierie des Connaissances, Plate-Forme Intelligence Artificielle (PFIA'22), Jun 2022, Saint-Etienne, France. pp.199-208. ⟨hal-03727958⟩
|
awilliamson/dribble-preprocessed2 | ---
dataset_info:
features:
- name: teams
sequence:
sequence: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 137545543.59759423
num_examples: 17986
download_size: 28013091
dataset_size: 137545543.59759423
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dribble-preprocessed2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HuggingFaceM4/enwiki-v2_valid-Sample | Invalid username or password. |
jaegerking/gimpy-test1 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 4186564
num_examples: 1000
download_size: 2245925
dataset_size: 4186564
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Aditya149/ultrachat | ---
dataset_info:
features:
- name: text
dtype: string
- name: lable
dtype: string
splits:
- name: train
num_bytes: 8258050042
num_examples: 5084540
- name: test
num_bytes: 917871375
num_examples: 564949
download_size: 5255674571
dataset_size: 9175921417
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
CyberHarem/scar_h_girlsfrontline | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of scar_h/SCAR-H (Girls' Frontline)
This is the dataset of scar_h/SCAR-H (Girls' Frontline), containing 20 images and their tags.
The core tags of this character are `bangs, long_hair, blonde_hair, blue_eyes, hat, ponytail, white_headwear, baseball_cap, breasts, brown_hair, purple_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...), and the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:-----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 20 | 25.37 MiB | [Download](https://huggingface.co/datasets/CyberHarem/scar_h_girlsfrontline/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 20 | 13.97 MiB | [Download](https://huggingface.co/datasets/CyberHarem/scar_h_girlsfrontline/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 49 | 30.24 MiB | [Download](https://huggingface.co/datasets/CyberHarem/scar_h_girlsfrontline/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 20 | 22.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/scar_h_girlsfrontline/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 49 | 43.37 MiB | [Download](https://huggingface.co/datasets/CyberHarem/scar_h_girlsfrontline/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/scar_h_girlsfrontline',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | blue_gloves, 1girl, solo, assault_rifle, black_jacket, feet_out_of_frame, holding_gun, looking_at_viewer, white_background, long_sleeves, midriff, navel, pants, simple_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | blue_gloves | 1girl | solo | assault_rifle | black_jacket | feet_out_of_frame | holding_gun | looking_at_viewer | white_background | long_sleeves | midriff | navel | pants | simple_background |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------|:--------|:-------|:----------------|:---------------|:--------------------|:--------------|:--------------------|:-------------------|:---------------|:----------|:--------|:--------|:--------------------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
CyberHarem/kazuki_kuwanomi_plasticmemories | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Kazuki Kuwanomi (Plastic Memories)
This is the dataset of Kazuki Kuwanomi (Plastic Memories), containing 139 images and their tags.
The core tags of this character are `red_hair, purple_eyes, folded_ponytail`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...), and the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 139 | 103.56 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kazuki_kuwanomi_plasticmemories/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 139 | 82.77 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kazuki_kuwanomi_plasticmemories/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 283 | 154.56 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kazuki_kuwanomi_plasticmemories/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 139 | 103.52 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kazuki_kuwanomi_plasticmemories/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 283 | 185.52 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kazuki_kuwanomi_plasticmemories/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kazuki_kuwanomi_plasticmemories',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 7 |  |  |  |  |  | 1girl, blush, wine_glass, food, solo_focus |
| 1 | 10 |  |  |  |  |  | 1girl, solo, hair_between_eyes, portrait, closed_mouth, anime_coloring, looking_at_viewer, bangs |
| 2 | 5 |  |  |  |  |  | 1girl, anime_coloring, looking_at_viewer, parody, solo, open_mouth, bangs |
| 3 | 5 |  |  |  |  |  | 1girl, closed_mouth, looking_at_viewer, solo, collarbone, hair_between_eyes, upper_body, short_hair, sleeveless_shirt, yellow_shirt |
| 4 | 17 |  |  |  |  |  | 1girl, solo, detached_sleeves, sitting, short_hair |
| 5 | 5 |  |  |  |  |  | 1girl, crossed_arms, solo, necktie, school_uniform, sleeveless, breasts, looking_at_viewer, upper_body |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | wine_glass | food | solo_focus | solo | hair_between_eyes | portrait | closed_mouth | anime_coloring | looking_at_viewer | bangs | parody | open_mouth | collarbone | upper_body | short_hair | sleeveless_shirt | yellow_shirt | detached_sleeves | sitting | crossed_arms | necktie | school_uniform | sleeveless | breasts |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------------|:-------|:-------------|:-------|:--------------------|:-----------|:---------------|:-----------------|:--------------------|:--------|:---------|:-------------|:-------------|:-------------|:-------------|:-------------------|:---------------|:-------------------|:----------|:---------------|:----------|:-----------------|:-------------|:----------|
| 0 | 7 |  |  |  |  |  | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 1 | 10 |  |  |  |  |  | X | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | | | | | X | | | | X | X | X | X | X | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | | | | | X | X | | X | | X | | | | X | X | X | X | X | | | | | | | |
| 4 | 17 |  |  |  |  |  | X | | | | | X | | | | | | | | | | | X | | | X | X | | | | | |
| 5 | 5 |  |  |  |  |  | X | | | | | X | | | | | X | | | | | X | | | | | | X | X | X | X | X |
|
JACINTO223/zorilho | ---
license: openrail
---
|
vbrydik/ukr-male-speaker-0-v0-vits | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 410136063.252
num_examples: 2292
- name: test
num_bytes: 19236183.0
num_examples: 100
download_size: 459440269
dataset_size: 429372246.252
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Softage-AI/rlhf-ranking_dataset | ---
license: mit
language:
- en
---
# RLHF Response Ranking Dataset
## Description
This dataset supports research in Response Ranking for Large Language Models (RLHF) in the CODE & STEM domain.
It contains 500 prompt-response pairs, each with the following data attributes:
- M_Id & S.No.: Unique identifier for the prompt-response pair.
- Prompt: The original query or problem statement.
- Response 1 & 2: Responses generated by different language models.
- prompt_type: Category of the prompt (e.g., mathematical equation, coding problem).
- Preference: Indicates which response is considered better (1 or 2).
- Remark: Additional information about the ranking decision.
- Safety labels (all Y/N):
- Fails to follow instructions
- Contains sexual content
- Contains violent content
- Encourages harmful behavior
- Expresses moral judgment
- Gives harmful advice
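A hypothetical row in this schema, showing how the preference label and the Y/N safety flags might sit together (field names abbreviated and values invented for illustration):

```python
# Hypothetical prompt-response pair with a preference and Y/N safety labels
pair = {
    "prompt": "Write a function that reverses a string.",
    "response_1": "def rev(s): return s[::-1]",
    "response_2": "Use a loop to build the reversed string.",
    "prompt_type": "coding problem",
    "preference": 1,  # which response is considered better (1 or 2)
    "safety": {
        "fails_to_follow_instructions": "N",
        "contains_sexual_content": "N",
        "contains_violent_content": "N",
        "encourages_harmful_behavior": "N",
        "expresses_moral_judgment": "N",
        "gives_harmful_advice": "N",
    },
}

assert pair["preference"] in (1, 2)
assert all(v in ("Y", "N") for v in pair["safety"].values())
```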
## Dataset Source
This dataset is curated by the delivery team @SoftAge.
## Limitations and Biases
- This dataset might not capture the full diversity of CODE & STEM problems and response qualities.
- Preference labels and safety ratings might reflect the inherent biases of human annotators or domain experts.
## Potential Uses
- Training and analysing RLHF models for generating informative and safe responses in the CODE & STEM domain.
- Identifying areas for improvement in language models.
- Developing new metrics and methods for RLHF in different domains. |
SinKove/synthetic_brain_mri | ---
license: openrail
task_categories:
- image-classification
language:
- en
tags:
- medical
- brain-data
- mri
pretty_name: Brain imaging generation with Latent Diffusion Models
size_categories:
- n<1K
---
# Dataset Card for Brain imaging generation with Latent Diffusion Models
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Amigo homepage](https://amigos.ai/)
- **Paper:** [Brain imaging generation with Latent Diffusion Models](https://arxiv.org/abs/2209.07162)
- **Point of Contact:** [Walter H. L. Pinaya](mailto:walter.diaz_sanz@kcl.ac.uk)
### Dataset Summary
This dataset was obtained as part of the Generative Modelling project from the Artificial Medical Intelligence Group -
AMIGO (https://amigos.ai/). It consists of 1,000 synthetic T1w images sampled from generative models trained on
data originally from the UK Biobank dataset (https://www.ukbiobank.ac.uk/).
### Languages
The language in the dataset is English.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- `prompt_age`: a float value used during sampling to specify the age of the generated brain image (in years)
- `prompt_sex`: a string used during sampling to specify the sex ("M" for male and "F" for female)
- `prompt_ventricular_volume`: a float value used during sampling to specify the volume of ventricular cerebrospinal fluid (in mm^3; based on UKB Data-Field 25004)
- `prompt_brain_volume`: a float value used during sampling to specify the brain volume normalised for head size (in mm^3; based on UKB Data-Field 25009)
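A minimal sketch validating a hypothetical prompt record against the field descriptions above (the record values are invented for illustration, not taken from the dataset):

```python
def check_prompt(record):
    """Light validation of the sampling-prompt fields described above."""
    assert isinstance(record["prompt_age"], float)   # age in years
    assert record["prompt_sex"] in ("M", "F")        # sex as "M"/"F"
    assert record["prompt_ventricular_volume"] > 0   # mm^3
    assert record["prompt_brain_volume"] > 0         # mm^3, head-size normalised
    return True

# Hypothetical record (values invented for illustration)
check_prompt({
    "prompt_age": 63.0,
    "prompt_sex": "F",
    "prompt_ventricular_volume": 32000.0,
    "prompt_brain_volume": 1500000.0,
})
```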
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Licensing Information
The "Brain imaging generation with Latent Diffusion Models" dataset is released under the [OpenRAIL License](https://huggingface.co/blog/open_rail).
### Citation Information
```
@inproceedings{pinaya2022brain,
title={Brain imaging generation with latent diffusion models},
author={Pinaya, Walter HL and Tudosiu, Petru-Daniel and Dafflon, Jessica and Da Costa, Pedro F and Fernandez, Virginia and Nachev, Parashkev and Ourselin, Sebastien and Cardoso, M Jorge},
booktitle={MICCAI Workshop on Deep Generative Models},
pages={117--126},
year={2022},
organization={Springer}
}
```
### Contributions
Thanks to [@Warvito](https://github.com/Warvito) for adding this dataset. |
mwalmsley/gz_rings_overcomplicated | ---
dataset_info:
- config_name: classification
features:
- name: image
dtype: image
- name: majority_vote
dtype:
class_label:
names:
'0': not_ring
'1': ring
splits:
- name: train
num_bytes: 12455441
num_examples: 73356
- name: test
num_bytes: 3114071
num_examples: 18340
download_size: 3996794243
dataset_size: 15569512
- config_name: regression
features:
- name: image
dtype: image
- name: ring_vote_fraction
dtype: float32
splits:
- name: train
num_bytes: 12162017
num_examples: 73356
- name: test
num_bytes: 3040711
num_examples: 18340
download_size: 3996794243
dataset_size: 15202728
---
## Usage Examples
```python
import datasets

kwargs = dict(token=True, trust_remote_code=True)
# dataset_name = 'foundation/datasets/gz_rings.py' # local debug
dataset_name = 'mwalmsley/gz_rings' # remote
# Load a dataset and print the first example in the training set
dataset = datasets.load_dataset(dataset_name, 'classification', **kwargs)
print(dataset['train'][0])
# Load a dataset and print the first example in the test set
dataset = datasets.load_dataset(dataset_name, 'classification', **kwargs)
print(dataset['test'][0])
# Load a dataset and print the first example in the training set
dataset = datasets.load_dataset(dataset_name, 'regression', **kwargs)
print(dataset['train'][0])
# Load a dataset and print the first example in the test set
dataset = datasets.load_dataset(dataset_name, 'regression', **kwargs)
print(dataset['test'][0])
``` |
ragha92/FNS_Summarization | ---
license: mit
language:
- en
tags:
- finance
pretty_name: FNS Summarization
task_categories:
- summarization
--- |
aasem/iqbal-qa | ---
license: apache-2.0
---
|
luizlzg/drbyte_longanswer | ---
task_categories:
- text-generation
language:
- pt
tags:
- medical
- biology
size_categories:
- 10K<n<100K
configs:
- config_name: default
data_files:
- split: train
path: drbyte_ptbr_teste*
--- |
dutchnaoteam/spl-object-v4 | ---
dataset_info:
features:
- name: image
dtype: image
- name: objects
struct:
- name: bboxes
sequence:
sequence: float64
- name: categories
sequence: int64
splits:
- name: train
num_bytes: 510519286.6672428
num_examples: 2919
- name: validation
num_bytes: 70359982.1068786
num_examples: 365
- name: test
num_bytes: 62908276.10687859
num_examples: 365
download_size: 674287891
dataset_size: 643787544.881
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
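The `objects` struct above stores parallel sequences of boxes and category ids; a hypothetical annotation showing the expected alignment (the coordinate convention is assumed, not specified by the card):

```python
# Hypothetical annotation matching the schema above
objects = {
    "bboxes": [
        [12.0, 30.5, 80.0, 96.0],    # assumed [x1, y1, x2, y2]
        [100.0, 40.0, 140.0, 90.0],
    ],
    "categories": [0, 2],
}

# Boxes and category ids are parallel sequences: one category per box
assert len(objects["bboxes"]) == len(objects["categories"])
assert all(len(box) == 4 for box in objects["bboxes"])
```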
|
ptx0/photo-concept-bucket | ---
license: openrail++
---
## Photo Concept Bucket
The purpose of this dataset is to distribute a high-quality, free-to-use dataset containing samples that require no attribution and have an open license.
All of the images were captioned in a cluster containing:
- 38x 3090 24G
- 6x 4090 24G
- 8x A5000 24G
- 2x A100 80G
- A couple of volunteers running a 3090 or 4090.
The model was running in fp8 precision using 🤗Transformers and 🤗Accelerate for easy multi-GPU captioning.
The captioning was spread across 10 different systems, at a GPU rental cost of approx. $350 USD.
### General Information
- **Dataset Name**: Photo Concept Bucket
- **Size**: 567,597 entries
- **Columns**: 18
- **Memory Usage**: Approximately 78.0 MB
- **Creator**: pseudoterminalx
### Column Descriptions
- **id**: The original Unique identifier for each photo (integer).
  - This may be used to map the images back to their original source, should any of the URL formats change.
- **class_label**: Classification label for the photo (string).
- These were the search term that resulted in the image being captured.
- **type**: Type of image (e.g., photo, digital art) (string).
- **slug**: A slug that points to this image; sometimes descriptive (string).
- **description**: Author-provided description of the photo. Many values are missing, some contain spam. (string).
- **alt**: Alternative text for the photo, seemingly an auto-generated caption. Not very high quality. (string).
- **created_at**: Timestamp when the photo was uploaded (string).
- **title**: Author-provided title of the photo (string, some missing values).
- **location**: Location of the author; this does not necessarily represent the location of the photo, though it often does (string, many missing values).
- **tags**: Tags associated with the photo (string).
- These seem to contain a lot of information, but they're not very accurate.
- **main_color**: The dominant color in the photo (string).
- **colors**: List of colors identified in the photo (string).
- **width**: Width of the photo in pixels (integer).
- **height**: Height of the photo in pixels (integer).
- **aspect_ratio**: Aspect ratio of the photo (float).
- **url**: URL to the photo (string).
- **megapixels**: Megapixels of the photo (float).
- **cogvlm_caption**: A CogVLM (fp8) caption derived from the query 'Caption this image as accurately as possible, without speculation. Describe what you see.' (string)
### Statistics
- **id**: Ranges from 474 to 20,329,130, with an average of 13,679,720.
- **Width**: Photos range in width from 684 to 24,538 pixels, with an average width of 4,393 pixels.
- **Height**: Photos range in height from 363 to 26,220 pixels, with an average height of 4,658 pixels.
- **Aspect Ratio**: Ranges from 0.228 to 4.928, with an average aspect ratio of approximately 1.016.
- **Megapixels**: The dataset contains photos ranging from 0.54 to 536.8604 megapixels, with an average of 20.763 megapixels.
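The derived `aspect_ratio` and `megapixels` columns follow directly from `width` and `height`. A minimal sketch, assuming the standard definitions (the card does not spell them out):

```python
def aspect_ratio(width: int, height: int) -> float:
    """Width divided by height; > 1 is landscape, < 1 is portrait."""
    return width / height


def megapixels(width: int, height: int) -> float:
    """Total pixel count, in millions."""
    return width * height / 1_000_000


# A hypothetical photo at the dataset's average dimensions (4,393 x 4,658):
print(round(aspect_ratio(4393, 4658), 3))  # 0.943 (slightly portrait)
print(round(megapixels(4393, 4658), 3))    # 20.463
```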
### Usage Examples
This dataset can be used for a variety of machine learning tasks, including image classification, object detection, and color analysis. Users should take note of the high variability in image dimensions and the sparsity of the `description` and `location` columns.
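As a sketch, a row-level predicate like the following could be passed to 🤗Datasets' `.filter()` to keep only large, roughly square photos with a usable description. Column names are taken from the descriptions above; the repo id and split name in the comment are assumptions:

```python
def keep(example: dict) -> bool:
    """Keep large, roughly square photos that have a non-empty description."""
    return (
        example["megapixels"] >= 10
        and 0.9 <= example["aspect_ratio"] <= 1.1
        and bool(example.get("description"))
    )


# With the 🤗Datasets library this would be applied as, e.g.:
#   from datasets import load_dataset
#   ds = load_dataset("ptx0/photo-concept-bucket", split="train").filter(keep)
print(keep({"megapixels": 20.8, "aspect_ratio": 1.0, "description": "a red barn"}))  # True
print(keep({"megapixels": 20.8, "aspect_ratio": 1.0, "description": None}))          # False
```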
### Known Issues
- The `description` column has a significant number of missing values, which may limit its use for tasks requiring detailed textual information about the images.
- There is variability in the presence of `title` and `location` information, with several entries missing these details.
- The `tags` column contains a lot of noise, which may degrade models that rely on these tags for tasks involving image classification or generation.
---
This dataset card provides an overview of the dataset's structure, content, and some basic statistics. Depending on your specific use case or research needs, you may want to expand certain sections with additional details or examples. |
huggingnft/theshiboshis | ---
tags:
- huggingnft
- nft
- huggan
- gan
- image
- images
task:
- unconditional-image-generation
datasets:
- huggingnft/theshiboshis
license: mit
---
# Dataset Card
## Disclaimer
All rights belong to their owners.
Models and datasets can be removed from the site at the request of the copyright holder.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingnft](https://github.com/AlekseyKorshuk/huggingnft)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
NFT images dataset for unconditional generation.
NFT collection available [here](https://opensea.io/collection/theshiboshis).
Model is available [here](https://huggingface.co/huggingnft/theshiboshis).
Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingnft/theshiboshis")
```
## Dataset Structure
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
- `image`: an `image` feature.
- `id`: an `int` feature.
- `token_metadata`: a `str` feature.
- `image_original_url`: a `str` feature.
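As a sketch, a single record could be inspected like this. The values below are synthetic, `token_metadata` is assumed to be a JSON-encoded string, and the `image` field (decoded to a PIL image by 🤗Datasets) is omitted for brevity:

```python
import json

# Synthetic record mirroring the fields listed above (hypothetical values).
record = {
    "id": 0,
    "token_metadata": '{"name": "Shiboshi #0", "attributes": []}',
    "image_original_url": "https://example.com/0.png",
}

# Parse the metadata string into a dict for downstream use.
meta = json.loads(record["token_metadata"])
print(meta["name"])  # Shiboshi #0
```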
### Data Splits
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingnft,
    author={Aleksey Korshuk},
    year=2022
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingnft)
|
open-llm-leaderboard/details_facebook__opt-2.7b | ---
pretty_name: Evaluation run of facebook/opt-2.7b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [facebook/opt-2.7b](https://huggingface.co/facebook/opt-2.7b) on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_facebook__opt-2.7b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-19T03:26:05.209079](https://huggingface.co/datasets/open-llm-leaderboard/details_facebook__opt-2.7b/blob/main/results_2023-10-19T03-26-05.209079.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0010486577181208054,\n\
\ \"em_stderr\": 0.0003314581465219369,\n \"f1\": 0.04767407718120815,\n\
\ \"f1_stderr\": 0.0011986644527763738,\n \"acc\": 0.31092412335527203,\n\
\ \"acc_stderr\": 0.007478442861762106\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0010486577181208054,\n \"em_stderr\": 0.0003314581465219369,\n\
\ \"f1\": 0.04767407718120815,\n \"f1_stderr\": 0.0011986644527763738\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.002274450341167551,\n \
\ \"acc_stderr\": 0.0013121578148673927\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6195737963693765,\n \"acc_stderr\": 0.01364472790865682\n\
\ }\n}\n```"
repo_url: https://huggingface.co/facebook/opt-2.7b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|arc:challenge|25_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_19T03_26_05.209079
path:
- '**/details_harness|drop|3_2023-10-19T03-26-05.209079.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-19T03-26-05.209079.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_19T03_26_05.209079
path:
- '**/details_harness|gsm8k|5_2023-10-19T03-26-05.209079.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-19T03-26-05.209079.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hellaswag|10_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T16:25:28.050181.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T16:25:28.050181.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T16:25:28.050181.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_19T03_26_05.209079
path:
- '**/details_harness|winogrande|5_2023-10-19T03-26-05.209079.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-19T03-26-05.209079.parquet'
- config_name: results
data_files:
- split: 2023_07_19T16_25_28.050181
path:
- results_2023-07-19T16:25:28.050181.parquet
- split: 2023_10_19T03_26_05.209079
path:
- results_2023-10-19T03-26-05.209079.parquet
- split: latest
path:
- results_2023-10-19T03-26-05.209079.parquet
---
# Dataset Card for Evaluation run of facebook/opt-2.7b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/facebook/opt-2.7b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [facebook/opt-2.7b](https://huggingface.co/facebook/opt-2.7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_facebook__opt-2.7b",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-19T03:26:05.209079](https://huggingface.co/datasets/open-llm-leaderboard/details_facebook__opt-2.7b/blob/main/results_2023-10-19T03-26-05.209079.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0010486577181208054,
"em_stderr": 0.0003314581465219369,
"f1": 0.04767407718120815,
"f1_stderr": 0.0011986644527763738,
"acc": 0.31092412335527203,
"acc_stderr": 0.007478442861762106
},
"harness|drop|3": {
"em": 0.0010486577181208054,
"em_stderr": 0.0003314581465219369,
"f1": 0.04767407718120815,
"f1_stderr": 0.0011986644527763738
},
"harness|gsm8k|5": {
"acc": 0.002274450341167551,
"acc_stderr": 0.0013121578148673927
},
"harness|winogrande|5": {
"acc": 0.6195737963693765,
"acc_stderr": 0.01364472790865682
}
}
```
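For quick inspection, the per-task metrics above can be flattened into rows and compared. This is a minimal sketch using only the values printed in the JSON above (no download required; the keys mirror the "latest results" block):

```python
# Per-task metrics copied from the "Latest results" JSON above.
results = {
    "harness|drop|3": {"em": 0.0010486577181208054, "f1": 0.04767407718120815},
    "harness|gsm8k|5": {"acc": 0.002274450341167551},
    "harness|winogrande|5": {"acc": 0.6195737963693765},
}

# Flatten into (task, metric, value) rows for tabular display.
rows = [
    (task, metric, value)
    for task, metrics in results.items()
    for metric, value in metrics.items()
]

# Find the task with the highest accuracy among those reporting "acc".
best_acc_task = max(
    (t for t in results if "acc" in results[t]), key=lambda t: results[t]["acc"]
)
print(best_acc_task)  # harness|winogrande|5
```

The same pattern applies to the full results file once it is loaded via the "results" configuration shown earlier.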
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
fun1021183/cvt1_GS3_test4 | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 194977374.302
num_examples: 1257
- name: test
num_bytes: 354176115.15
num_examples: 2221
download_size: 548038875
dataset_size: 549153489.4519999
---
# Dataset Card for "cvt1_GS3_test4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
baptistecolle/sam-controlnet-4 | ---
dataset_info:
features:
- name: image
dtype: image
- name: filepath
dtype: string
- name: sentids
sequence: int64
- name: filename
dtype: string
- name: imgid
dtype: int64
- name: split
dtype: string
- name: sentences
struct:
- name: imgid
dtype: int64
- name: raw
dtype: string
- name: sentid
dtype: int64
- name: tokens
sequence: string
- name: cocoid
dtype: int64
- name: masks
sequence:
sequence:
sequence: bool
- name: scores
sequence: float32
splits:
- name: train
num_bytes: 115970746.0
num_examples: 41
download_size: 6382710
dataset_size: 115970746.0
---
# Dataset Card for "sam-controlnet-4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gabrielsantosrv/pracegover | ---
language:
- pt
---
# Dataset Card for #PraCegoVer
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Citation Information](#citation-information)
## Dataset Description
- **Dataset repository:** [#PraCegoVer dataset](https://doi.org/10.5281/zenodo.5710562)
- **Github repository:** [PraCegoVer](https://github.com/larocs/PraCegoVer)
- **Paper:** [#PraCegoVer: A Large Dataset for Image Captioning in Portuguese](https://doi.org/10.3390/data7020013)
- **Contact:** [Gabriel Oliveira](g194760@dac.unicamp.br)
### Dataset Summary
\#PraCegoVer is a multi-modal dataset with Portuguese captions based on posts from Instagram. It is the first large dataset for image captioning in Portuguese with freely annotated images. The dataset has been created to alleviate the lack of datasets with Portuguese captions for visual-linguistic tasks.
\#PraCegoVer comprises 533,523 instances representing public posts collected from Instagram tagged with #PraCegoVer. The data were collected from more than 14 thousand different profiles.
This dataset contains images of people, and it consists of data collected from public profiles on Instagram. Thus, the images and raw captions might contain sensitive data that reveal racial or ethnic origins, sexual orientations, and religious beliefs. Hence, under Brazilian Law No. 13,709, to avoid the unintended use of our dataset, we decided to restrict its access, ensuring that the dataset will be used for **research purposes only**.
### Supported Tasks and Leaderboards
- `image-captioning`, `image-to-text`: the dataset can be used to train models for Image Captioning, which consists in generating a short description of the visual content of a given image. The model performance is typically measured using [ROUGE](https://huggingface.co/metrics/rouge), [METEOR](https://huggingface.co/spaces/evaluate-metric/meteor), [**CIDEr-D**](https://openaccess.thecvf.com/content_cvpr_2015/html/Vedantam_CIDEr_Consensus-Based_Image_2015_CVPR_paper.html), [**CIDEr-R**](https://aclanthology.org/2021.wnut-1.39/) and [**SPICE**](https://link.springer.com/chapter/10.1007/978-3-319-46454-1_24).
### Languages
The captions in this dataset are in Brazilian Portuguese (pt-BR).
## Dataset Structure
### Data Instances
\#PraCegoVer dataset is composed of the main file `dataset.json` and a directory with images, `images`. The instances in `dataset.json` have the following format:
```
{'user': '16247e952a987935792d1d9d937eeb8413e0367cfb9c5e640db1d1bc4a58dc01',
'filename': 'i-00518416.jpg',
'raw_caption': 'Com mais de 12 milhões de habitantes 👨🏾👩🏼🦰, #SãoPaulo é a maior e mais populosa cidade do Brasil 🇧🇷 , além de ser a primeira metrópole da América e do hemisfério sul.\n\nSe você mora nessa cidade incrível, comente um ❤️ nesta imagem.\n\n📸Bruno Mancini\n\n#MaisSegurosJuntos #Segurança #Aplicativo #Metrópole #Brasil\n\n#PraCegoVer #PraTodosVerem: Foto do Rio Pinheiros, em São Paulo, mostrando a Ponte Estaiada e vários prédios dos dois lados do rio. Com um tom azulado a imagem possui quatro i´s transparentes bem suaves cobrindo-a toda',
'caption': 'Foto do Rio Pinheiros, em São Paulo, mostrando a Ponte Estaiada e vários prédios dos dois lados do rio. Com um tom azulado a imagem possui quatro i´s transparentes bem suaves cobrindo-a toda.',
'date': '25-09-2020'},
```
### Data Fields
- `user`: anonymized user that created the post;
- `filename`: image file name, which indicates the image in the `images` directory;
- `raw_caption`: raw caption;
- `caption`: clean caption;
- `date`: post date.
### Data Splits
This dataset comes with two specified train/validation/test splits, one for #PraCegoVer-63K (train/validation/test: 37,881/12,442/12,612) and another for #PraCegoVer-173K (train/validation/test: 104,004/34,452/34,882). These splits are subsets of the whole dataset.
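The split sizes above can be sanity-checked with a few lines of code. This sketch uses only the numbers quoted in the paragraph above:

```python
# Split sizes for the two official subsets, as stated in this card.
splits = {
    "PraCegoVer-63K": {"train": 37_881, "validation": 12_442, "test": 12_612},
    "PraCegoVer-173K": {"train": 104_004, "validation": 34_452, "test": 34_882},
}

# Total size of each subset.
totals = {name: sum(sizes.values()) for name, sizes in splits.items()}
print(totals)  # {'PraCegoVer-63K': 62935, 'PraCegoVer-173K': 173338}

# Both subsets use roughly 60/20/20 train/validation/test proportions.
for name, sizes in splits.items():
    train_frac = sizes["train"] / totals[name]
    assert 0.55 < train_frac < 0.65
```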
## Dataset Creation
### Curation Rationale
Automatically describing images using natural sentences is an essential task for the inclusion of visually impaired people on the Internet. Although there are many datasets in the literature, most of them contain only English captions, whereas datasets with captions described in other languages are scarce.
Then, inspired by the movement [PraCegoVer](https://mwpt.com.br/criadora-do-projeto-pracegover-incentiva-descricao-de-imagens-na-web/), #PraCegoVer dataset has been created to provide images annotated with descriptions in Portuguese for the image captioning task. With this dataset, we aim to alleviate the lack of datasets with Portuguese captions for visual-linguistic tasks.
### Source Data
#### Initial Data Collection and Normalization
The data were collected from posts on Instagram that tagged #PraCegoVer. Descriptions are extracted from the raw image captions using regular expressions. The script to download more data is available in the [\#PraCegoVer repository](https://github.com/larocs/PraCegoVer). We collected the data on a daily basis from 2020 to 2021, but the posts may have been created at any time before this period.
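The extraction step can be sketched with a simple regular expression that keeps the text following the #PraCegoVer (or #PraTodosVerem) marker. The pattern actually used by the authors lives in their repository, so the one below is only an illustrative approximation:

```python
import re


def extract_description(raw_caption: str) -> str:
    """Approximate the cleaning step: keep the text after the
    #PraCegoVer/#PraTodosVerem tag (illustrative pattern only)."""
    match = re.search(
        r"#Pra(?:CegoVer|TodosVerem)[:\s]*(.+)\Z",
        raw_caption,
        flags=re.DOTALL | re.IGNORECASE,
    )
    return match.group(1).strip() if match else raw_caption.strip()


raw = "Legenda do post...\n\n#PraCegoVer: Foto do Rio Pinheiros, em São Paulo."
print(extract_description(raw))  # Foto do Rio Pinheiros, em São Paulo.
```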
#### Who are the source language producers?
The language producers are Instagram users that post images tagging #PraCegoVer.
#### Who are the annotators?
The Instagram users that tag \#PraCegoVer spontaneously add a short description of the image content in their posts.
### Personal and Sensitive Information
The usernames were anonymized in order to make it difficult to directly identify the individuals. However, the remaining data are not anonymized; thus, the individuals present in the images can be identified. Moreover, the images and raw captions might contain data revealing racial or ethnic origins, sexual orientations, religious beliefs, political opinions, or union memberships.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop better image captioning models in Portuguese. Such models are essential to help the inclusion of visually impaired people on the Internet, making it more inclusive and democratic.
However, it is worth noting that the data were collected from public profiles on Instagram and were not thoroughly validated. Thus, the dataset might contain offensive or insulting content, although an exploratory analysis shows that potentially offensive words occur only rarely. Moreover, the images and raw captions might contain data revealing racial or ethnic origins, sexual orientations, religious beliefs, political opinions, or union memberships.
### Discussion of Biases
We collected the data from public posts on Instagram. Thus, the data is susceptible to the bias of its algorithm and stereotypes. We conducted an initial analysis of the bias within our dataset. [Figure 20](https://www.mdpi.com/data/data-07-00013/article_deploy/html/images/data-07-00013-g020-550.jpg) from the dataset's paper shows that women are frequently associated with beauty, cosmetic products, and domestic violence. Moreover, black women co-occur more often with terms such as "racism", "discrimination", "prejudice"and "consciousness", whereas white women appear with "spa", "hair", and "lipstick", and indigenous women are mostly associated with beauty products. Similarly, black men frequently appear together with the terms "Zumbi dos Palmares", "consciousness", "racism", "United States"and "justice", while white men are associated with "theatre", "wage", "benefit"and "social security". In addition, [Table 4](https://www.mdpi.com/2306-5729/7/2/13/htm) from the dataset's paper shows that women are more frequently associated with physical words (e.g., thin, fat); still, fat people appear more frequently than thin people. [Figure 21](https://www.mdpi.com/data/data-07-00013/article_deploy/html/images/data-07-00013-g021-550.jpg) from the dataset's paper illustrates that fat women are also related to swearing words, "mental harassment", and "boss", while thin women are associated with "vitamin", "fruits", and "healthy skin". To sum up, depending on the usage of this dataset, future users may take these aspects into account.
## Additional Information
### Dataset Curators
The dataset was created by [Gabriel Oliveira](https://orcid.org/0000-0003-2835-1331), [Esther Colombini](https://orcid.org/0000-0003-0467-3133) and [Sandra Avila](https://orcid.org/0000-0001-9068-938X).
### Citation Information
If you use \#PraCegoVer dataset, please cite as:
```
@article{pracegover2022,
AUTHOR = {dos Santos, Gabriel Oliveira and Colombini, Esther Luna and Avila, Sandra},
TITLE = {#PraCegoVer: A Large Dataset for Image Captioning in Portuguese},
JOURNAL = {Data},
VOLUME = {7},
YEAR = {2022},
NUMBER = {2},
ARTICLE-NUMBER = {13},
URL = {https://www.mdpi.com/2306-5729/7/2/13},
ISSN = {2306-5729},
DOI = {10.3390/data7020013}
}
``` |
Alex-Song/Test2 | ---
license: apache-2.0
extra_gated_prompt: |
The MultiSpeech dataset is available to download for non-commercial purposes under the CC BY-NC-ND 4.0 International License. MultiSpeech doesn't own the copyright of the audios, the copyright remains with the original owners of the video or audio, and the public URL is given for the original video or audio.
The "Researcher" has requested permission to use the MultiSpeech database (the "Database"). In exchange for such permission, Researcher hereby agrees to the following terms and conditions:
- Researcher shall use the Database only for non-commercial research and educational purposes.
- The authors make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose.
- Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify the authors of MultiSpeech, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted audio files that he or she may create from the Database.
- Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions.
- The authors reserve the right to terminate Researcher's access to the Database at any time.
- If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer.
extra_gated_fields:
Name: text
Email: text
Organization: text
Address: text
I agree to not attempt to determine the identity of speakers in this dataset: checkbox
I accept the terms of access: checkbox
---
|
distilled-one-sec-cv12-each-chunk-uniq/chunk_58 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1310112356.0
num_examples: 255283
download_size: 1334383708
dataset_size: 1310112356.0
---
# Dataset Card for "chunk_58"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ayushhk/dementiacare_btp | ---
license: mit
---
|
sudeshna84/mkb_hi_bn | ---
dataset_info:
features:
- name: hi
dtype: string
- name: bn
dtype: string
splits:
- name: train
num_bytes: 3600.5454545454545
num_examples: 7
- name: test
num_bytes: 2057.4545454545455
num_examples: 4
download_size: 11344
dataset_size: 5658.0
---
# Dataset Card for "mkb_hi_bn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Crystalcareai/Self-Discover-MM-Instruct-Alpaca | ---
license: apache-2.0
---
|
Yunij/resnet_embeddings | ---
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: file_path
dtype: string
- name: label
dtype: int64
- name: mag_spectrum_vec
dtype: string
- name: resnet_embeds
dtype: string
splits:
- name: train
num_bytes: 3265511311
num_examples: 300000
download_size: 1576995154
dataset_size: 3265511311
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
luizlzg/prefeitura_rj | ---
configs:
- config_name: default
data_files:
- split: train
path: prefeitura_treino*
- split: test
path: prefeitura_teste*
--- |
anonymoussubmissions/switchboard-ner-non-normalized | ---
dataset_info:
features:
- name: tokens
sequence: string
- name: labels
sequence: string
- name: labels_orig
sequence: string
- name: tags
sequence:
class_label:
names:
'0': O
'1': B-CARDINAL
'2': B-DATE
'3': B-EVENT
'4': B-FAC
'5': B-GPE
'6': B-LANGUAGE
'7': B-LAW
'8': B-LOC
'9': B-MONEY
'10': B-NORP
'11': B-ORDINAL
'12': B-ORG
'13': B-PERCENT
'14': B-PERSON
'15': B-PRODUCT
'16': B-QUANTITY
'17': B-TIME
'18': B-WORK_OF_ART
'19': I-CARDINAL
'20': I-DATE
'21': I-EVENT
'22': I-FAC
'23': I-GPE
'24': I-LANGUAGE
'25': I-LAW
'26': I-LOC
'27': I-MONEY
'28': I-NORP
'29': I-ORDINAL
'30': I-ORG
'31': I-PERCENT
'32': I-PERSON
'33': I-PRODUCT
'34': I-QUANTITY
'35': I-TIME
'36': I-WORK_OF_ART
- name: swbd_id
dtype: string
- name: swne_sentence_no
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 2309183
num_examples: 7713
- name: validation
num_bytes: 1539969
num_examples: 5143
- name: test
num_bytes: 2577677
num_examples: 8571
download_size: 913903
dataset_size: 6426829
---
# Dataset Card for "switchboard-ner-non-normalized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Nexdata/Urdu_Conversational_Speech_Data_by_Telephone | ---
language:
- ur
task_categories:
- conversational
---
# Dataset Card for Nexdata/Urdu_Conversational_Speech_Data_by_Telephone
## Description
This dataset contains 196 hours of Urdu conversational speech collected by telephone from 270 native speakers, with a balanced gender ratio. Speakers chose a few familiar topics from a given list and held conversations, ensuring the dialogues' fluency and naturalness. The recording devices were various mobile phones. The audio format is 8 kHz, 8-bit WAV, and all the speech data was recorded in quiet indoor environments. All the speech audio was manually transcribed with the text content, the start and end time of each effective sentence, and speaker identification.
For more details, please refer to the link: https://www.nexdata.ai/datasets/1242?source=Huggingface
# Specifications
## Format
8kHz, 8bit, u-law/a-law pcm, mono channel;
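As a sanity check for the format described above, a matching 8 kHz, 8-bit, mono file can be created and inspected with Python's standard-library `wave` module. Note this sketch writes linear PCM as a stand-in; the actual corpus uses u-law/a-law companded samples:

```python
import io
import wave

# Write one second of 8 kHz, 8-bit, mono audio (silence) to an in-memory WAV.
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)              # mono
    w.setsampwidth(1)              # 8 bit
    w.setframerate(8000)           # 8 kHz
    w.writeframes(b"\x80" * 8000)  # 0x80 = silence for unsigned 8-bit PCM

# Read it back and verify the header matches the specification above.
buf.seek(0)
with wave.open(buf, "rb") as r:
    assert r.getnchannels() == 1
    assert r.getsampwidth() == 1
    assert r.getframerate() == 8000
    assert r.getnframes() == 8000
```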
## Recording Environment
quiet indoor environment, without echo;
## Recording content
dozens of topics are specified, and the speakers converse on those topics while being recorded;
## Demographics
270 speakers in total, with 56% male and 44% female;
## Annotation
annotation of the transcription text, speaker identification, and gender;
## Device
Telephony recording system;
## Language
Urdu
## Application scenarios
speech recognition; voiceprint recognition;
## Accuracy rate
the word accuracy rate is not less than 95%
# Licensing Information
Commercial License |
tyzhu/squad_qa_rare_v5_full_random_permute_1 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer
dtype: string
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 4201825.944244605
num_examples: 2875
- name: validation
num_bytes: 345326
num_examples: 300
download_size: 1172281
dataset_size: 4547151.944244605
---
# Dataset Card for "squad_qa_rare_v5_full_random_permute_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
arieg/bw_spec_cls_80_34 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '75928'
'1': '75929'
'2': '75930'
'3': '75931'
'4': '75932'
'5': '75933'
'6': '75935'
'7': '75936'
'8': '75937'
'9': '75975'
'10': '76036'
'11': '76069'
'12': '76071'
'13': '76072'
'14': '76073'
'15': '76074'
'16': '76075'
'17': '76076'
'18': '76077'
'19': '76078'
'20': '76079'
'21': '76121'
'22': '76375'
'23': '76381'
'24': '76437'
'25': '76440'
'26': '76654'
'27': '76659'
'28': '77517'
'29': '77519'
'30': '77521'
'31': '77522'
'32': '77523'
'33': '77564'
'34': '77571'
'35': '77572'
'36': '77952'
'37': '78038'
'38': '78156'
'39': '78213'
'40': '78516'
'41': '78833'
'42': '78834'
'43': '78839'
'44': '78841'
'45': '78843'
'46': '78845'
'47': '78847'
'48': '78848'
'49': '78849'
'50': '78850'
'51': '78851'
'52': '78852'
'53': '78984'
'54': '78998'
'55': '79087'
'56': '79575'
'57': '79593'
'58': '79605'
'59': '79606'
'60': '79610'
'61': '79616'
'62': '79741'
'63': '79973'
'64': '79975'
'65': '79977'
'66': '79978'
'67': '79985'
'68': '79986'
'69': '79988'
'70': '79990'
'71': '79995'
'72': '80035'
'73': '80293'
'74': '80341'
'75': '80351'
'76': '80389'
'77': '80402'
'78': '80515'
'79': '80516'
splits:
- name: train
num_bytes: 88501139.2
num_examples: 1600
- name: test
num_bytes: 21775350.0
num_examples: 400
download_size: 109195616
dataset_size: 110276489.2
---
# Dataset Card for "bw_spec_cls_80_34"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_allknowingroger__MistralQ-7B-slerp | ---
pretty_name: Evaluation run of allknowingroger/MistralQ-7B-slerp
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [allknowingroger/MistralQ-7B-slerp](https://huggingface.co/allknowingroger/MistralQ-7B-slerp)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_allknowingroger__MistralQ-7B-slerp\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-04-11T04:54:25.683367](https://huggingface.co/datasets/open-llm-leaderboard/details_allknowingroger__MistralQ-7B-slerp/blob/main/results_2024-04-11T04-54-25.683367.json) (note\
\ that there might be results for other tasks in the repo if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6151669672879644,\n\
\ \"acc_stderr\": 0.03292809773418525,\n \"acc_norm\": 0.6215423973274969,\n\
\ \"acc_norm_stderr\": 0.03360025978446776,\n \"mc1\": 0.26560587515299877,\n\
\ \"mc1_stderr\": 0.015461027627253592,\n \"mc2\": 0.39319507774571977,\n\
\ \"mc2_stderr\": 0.014813613143803141\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5776450511945392,\n \"acc_stderr\": 0.014434138713379984,\n\
\ \"acc_norm\": 0.6237201365187713,\n \"acc_norm_stderr\": 0.014157022555407154\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6552479585739892,\n\
\ \"acc_stderr\": 0.004743160034271149,\n \"acc_norm\": 0.8466440948018323,\n\
\ \"acc_norm_stderr\": 0.0035959381241662124\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.28,\n \"acc_stderr\": 0.045126085985421296,\n \
\ \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.045126085985421296\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.562962962962963,\n\
\ \"acc_stderr\": 0.04284958639753401,\n \"acc_norm\": 0.562962962962963,\n\
\ \"acc_norm_stderr\": 0.04284958639753401\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6776315789473685,\n \"acc_stderr\": 0.03803510248351585,\n\
\ \"acc_norm\": 0.6776315789473685,\n \"acc_norm_stderr\": 0.03803510248351585\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.55,\n\
\ \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"acc_norm_stderr\"\
: 0.05\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"\
acc\": 0.6566037735849056,\n \"acc_stderr\": 0.029224526469124792,\n \
\ \"acc_norm\": 0.6566037735849056,\n \"acc_norm_stderr\": 0.029224526469124792\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7152777777777778,\n\
\ \"acc_stderr\": 0.037738099906869334,\n \"acc_norm\": 0.7152777777777778,\n\
\ \"acc_norm_stderr\": 0.037738099906869334\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.4,\n \"acc_stderr\": 0.049236596391733084,\n \
\ \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.049236596391733084\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.44,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.44,\n\
\ \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145633,\n \
\ \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145633\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6416184971098265,\n\
\ \"acc_stderr\": 0.036563436533531585,\n \"acc_norm\": 0.6416184971098265,\n\
\ \"acc_norm_stderr\": 0.036563436533531585\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.46078431372549017,\n \"acc_stderr\": 0.049598599663841815,\n\
\ \"acc_norm\": 0.46078431372549017,\n \"acc_norm_stderr\": 0.049598599663841815\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.76,\n \"acc_stderr\": 0.04292346959909283,\n \"acc_norm\": 0.76,\n\
\ \"acc_norm_stderr\": 0.04292346959909283\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5446808510638298,\n \"acc_stderr\": 0.032555253593403555,\n\
\ \"acc_norm\": 0.5446808510638298,\n \"acc_norm_stderr\": 0.032555253593403555\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4473684210526316,\n\
\ \"acc_stderr\": 0.04677473004491199,\n \"acc_norm\": 0.4473684210526316,\n\
\ \"acc_norm_stderr\": 0.04677473004491199\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5793103448275863,\n \"acc_stderr\": 0.0411391498118926,\n\
\ \"acc_norm\": 0.5793103448275863,\n \"acc_norm_stderr\": 0.0411391498118926\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4074074074074074,\n \"acc_stderr\": 0.025305906241590632,\n \"\
acc_norm\": 0.4074074074074074,\n \"acc_norm_stderr\": 0.025305906241590632\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.46825396825396826,\n\
\ \"acc_stderr\": 0.04463112720677172,\n \"acc_norm\": 0.46825396825396826,\n\
\ \"acc_norm_stderr\": 0.04463112720677172\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252606,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252606\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6967741935483871,\n\
\ \"acc_stderr\": 0.02614868593067175,\n \"acc_norm\": 0.6967741935483871,\n\
\ \"acc_norm_stderr\": 0.02614868593067175\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5270935960591133,\n \"acc_stderr\": 0.03512819077876106,\n\
\ \"acc_norm\": 0.5270935960591133,\n \"acc_norm_stderr\": 0.03512819077876106\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.63,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\"\
: 0.63,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7515151515151515,\n \"acc_stderr\": 0.033744026441394036,\n\
\ \"acc_norm\": 0.7515151515151515,\n \"acc_norm_stderr\": 0.033744026441394036\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7626262626262627,\n \"acc_stderr\": 0.030313710538198906,\n \"\
acc_norm\": 0.7626262626262627,\n \"acc_norm_stderr\": 0.030313710538198906\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8601036269430051,\n \"acc_stderr\": 0.025033870583015178,\n\
\ \"acc_norm\": 0.8601036269430051,\n \"acc_norm_stderr\": 0.025033870583015178\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6128205128205129,\n \"acc_stderr\": 0.02469721693087894,\n \
\ \"acc_norm\": 0.6128205128205129,\n \"acc_norm_stderr\": 0.02469721693087894\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3592592592592593,\n \"acc_stderr\": 0.02925290592725198,\n \
\ \"acc_norm\": 0.3592592592592593,\n \"acc_norm_stderr\": 0.02925290592725198\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6134453781512605,\n \"acc_stderr\": 0.03163145807552378,\n \
\ \"acc_norm\": 0.6134453781512605,\n \"acc_norm_stderr\": 0.03163145807552378\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3576158940397351,\n \"acc_stderr\": 0.03913453431177258,\n \"\
acc_norm\": 0.3576158940397351,\n \"acc_norm_stderr\": 0.03913453431177258\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8146788990825689,\n \"acc_stderr\": 0.016659279700295838,\n \"\
acc_norm\": 0.8146788990825689,\n \"acc_norm_stderr\": 0.016659279700295838\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.47685185185185186,\n \"acc_stderr\": 0.03406315360711507,\n \"\
acc_norm\": 0.47685185185185186,\n \"acc_norm_stderr\": 0.03406315360711507\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7843137254901961,\n \"acc_stderr\": 0.028867431449849316,\n \"\
acc_norm\": 0.7843137254901961,\n \"acc_norm_stderr\": 0.028867431449849316\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7679324894514767,\n \"acc_stderr\": 0.02747974455080851,\n \
\ \"acc_norm\": 0.7679324894514767,\n \"acc_norm_stderr\": 0.02747974455080851\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6457399103139013,\n\
\ \"acc_stderr\": 0.032100621541349864,\n \"acc_norm\": 0.6457399103139013,\n\
\ \"acc_norm_stderr\": 0.032100621541349864\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7862595419847328,\n \"acc_stderr\": 0.0359546161177469,\n\
\ \"acc_norm\": 0.7862595419847328,\n \"acc_norm_stderr\": 0.0359546161177469\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7768595041322314,\n \"acc_stderr\": 0.03800754475228732,\n \"\
acc_norm\": 0.7768595041322314,\n \"acc_norm_stderr\": 0.03800754475228732\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.75,\n\
\ \"acc_stderr\": 0.04186091791394607,\n \"acc_norm\": 0.75,\n \
\ \"acc_norm_stderr\": 0.04186091791394607\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7300613496932515,\n \"acc_stderr\": 0.034878251684978906,\n\
\ \"acc_norm\": 0.7300613496932515,\n \"acc_norm_stderr\": 0.034878251684978906\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.45535714285714285,\n\
\ \"acc_stderr\": 0.047268355537191,\n \"acc_norm\": 0.45535714285714285,\n\
\ \"acc_norm_stderr\": 0.047268355537191\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7864077669902912,\n \"acc_stderr\": 0.040580420156460344,\n\
\ \"acc_norm\": 0.7864077669902912,\n \"acc_norm_stderr\": 0.040580420156460344\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8461538461538461,\n\
\ \"acc_stderr\": 0.023636873317489298,\n \"acc_norm\": 0.8461538461538461,\n\
\ \"acc_norm_stderr\": 0.023636873317489298\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7726692209450831,\n\
\ \"acc_stderr\": 0.01498727064094601,\n \"acc_norm\": 0.7726692209450831,\n\
\ \"acc_norm_stderr\": 0.01498727064094601\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6791907514450867,\n \"acc_stderr\": 0.025131000233647893,\n\
\ \"acc_norm\": 0.6791907514450867,\n \"acc_norm_stderr\": 0.025131000233647893\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3642458100558659,\n\
\ \"acc_stderr\": 0.0160943387684746,\n \"acc_norm\": 0.3642458100558659,\n\
\ \"acc_norm_stderr\": 0.0160943387684746\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7352941176470589,\n \"acc_stderr\": 0.02526169121972948,\n\
\ \"acc_norm\": 0.7352941176470589,\n \"acc_norm_stderr\": 0.02526169121972948\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7009646302250804,\n\
\ \"acc_stderr\": 0.02600330111788514,\n \"acc_norm\": 0.7009646302250804,\n\
\ \"acc_norm_stderr\": 0.02600330111788514\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6759259259259259,\n \"acc_stderr\": 0.02604176620271716,\n\
\ \"acc_norm\": 0.6759259259259259,\n \"acc_norm_stderr\": 0.02604176620271716\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4645390070921986,\n \"acc_stderr\": 0.029752389657427047,\n \
\ \"acc_norm\": 0.4645390070921986,\n \"acc_norm_stderr\": 0.029752389657427047\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.43089960886571055,\n\
\ \"acc_stderr\": 0.012647695889547231,\n \"acc_norm\": 0.43089960886571055,\n\
\ \"acc_norm_stderr\": 0.012647695889547231\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6323529411764706,\n \"acc_stderr\": 0.029289413409403192,\n\
\ \"acc_norm\": 0.6323529411764706,\n \"acc_norm_stderr\": 0.029289413409403192\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6437908496732027,\n \"acc_stderr\": 0.019373332420724504,\n \
\ \"acc_norm\": 0.6437908496732027,\n \"acc_norm_stderr\": 0.019373332420724504\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7090909090909091,\n\
\ \"acc_stderr\": 0.04350271442923243,\n \"acc_norm\": 0.7090909090909091,\n\
\ \"acc_norm_stderr\": 0.04350271442923243\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7020408163265306,\n \"acc_stderr\": 0.029279567411065674,\n\
\ \"acc_norm\": 0.7020408163265306,\n \"acc_norm_stderr\": 0.029279567411065674\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.746268656716418,\n\
\ \"acc_stderr\": 0.030769444967296014,\n \"acc_norm\": 0.746268656716418,\n\
\ \"acc_norm_stderr\": 0.030769444967296014\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.83,\n \"acc_stderr\": 0.03775251680686371,\n \
\ \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.03775251680686371\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5301204819277109,\n\
\ \"acc_stderr\": 0.03885425420866767,\n \"acc_norm\": 0.5301204819277109,\n\
\ \"acc_norm_stderr\": 0.03885425420866767\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.7894736842105263,\n \"acc_stderr\": 0.0312678171466318,\n\
\ \"acc_norm\": 0.7894736842105263,\n \"acc_norm_stderr\": 0.0312678171466318\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.26560587515299877,\n\
\ \"mc1_stderr\": 0.015461027627253592,\n \"mc2\": 0.39319507774571977,\n\
\ \"mc2_stderr\": 0.014813613143803141\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7671665351223362,\n \"acc_stderr\": 0.011878201073856544\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3244882486732373,\n \
\ \"acc_stderr\": 0.012896095359768106\n }\n}\n```"
repo_url: https://huggingface.co/allknowingroger/MistralQ-7B-slerp
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|arc:challenge|25_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|gsm8k|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hellaswag|10_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-11T04-54-25.683367.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-11T04-54-25.683367.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- '**/details_harness|winogrande|5_2024-04-11T04-54-25.683367.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-04-11T04-54-25.683367.parquet'
- config_name: results
data_files:
- split: 2024_04_11T04_54_25.683367
path:
- results_2024-04-11T04-54-25.683367.parquet
- split: latest
path:
- results_2024-04-11T04-54-25.683367.parquet
---
# Dataset Card for Evaluation run of allknowingroger/MistralQ-7B-slerp
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [allknowingroger/MistralQ-7B-slerp](https://huggingface.co/allknowingroger/MistralQ-7B-slerp) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_allknowingroger__MistralQ-7B-slerp",
	"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-04-11T04:54:25.683367](https://huggingface.co/datasets/open-llm-leaderboard/details_allknowingroger__MistralQ-7B-slerp/blob/main/results_2024-04-11T04-54-25.683367.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; each can be found in the "results" configuration and in the "latest" split of each eval's configuration):
```python
{
"all": {
"acc": 0.6151669672879644,
"acc_stderr": 0.03292809773418525,
"acc_norm": 0.6215423973274969,
"acc_norm_stderr": 0.03360025978446776,
"mc1": 0.26560587515299877,
"mc1_stderr": 0.015461027627253592,
"mc2": 0.39319507774571977,
"mc2_stderr": 0.014813613143803141
},
"harness|arc:challenge|25": {
"acc": 0.5776450511945392,
"acc_stderr": 0.014434138713379984,
"acc_norm": 0.6237201365187713,
"acc_norm_stderr": 0.014157022555407154
},
"harness|hellaswag|10": {
"acc": 0.6552479585739892,
"acc_stderr": 0.004743160034271149,
"acc_norm": 0.8466440948018323,
"acc_norm_stderr": 0.0035959381241662124
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.28,
"acc_stderr": 0.045126085985421296,
"acc_norm": 0.28,
"acc_norm_stderr": 0.045126085985421296
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.562962962962963,
"acc_stderr": 0.04284958639753401,
"acc_norm": 0.562962962962963,
"acc_norm_stderr": 0.04284958639753401
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6776315789473685,
"acc_stderr": 0.03803510248351585,
"acc_norm": 0.6776315789473685,
"acc_norm_stderr": 0.03803510248351585
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6566037735849056,
"acc_stderr": 0.029224526469124792,
"acc_norm": 0.6566037735849056,
"acc_norm_stderr": 0.029224526469124792
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7152777777777778,
"acc_stderr": 0.037738099906869334,
"acc_norm": 0.7152777777777778,
"acc_norm_stderr": 0.037738099906869334
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.4,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.4,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.44,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.44,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.38,
"acc_stderr": 0.04878317312145633,
"acc_norm": 0.38,
"acc_norm_stderr": 0.04878317312145633
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6416184971098265,
"acc_stderr": 0.036563436533531585,
"acc_norm": 0.6416184971098265,
"acc_norm_stderr": 0.036563436533531585
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.46078431372549017,
"acc_stderr": 0.049598599663841815,
"acc_norm": 0.46078431372549017,
"acc_norm_stderr": 0.049598599663841815
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5446808510638298,
"acc_stderr": 0.032555253593403555,
"acc_norm": 0.5446808510638298,
"acc_norm_stderr": 0.032555253593403555
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4473684210526316,
"acc_stderr": 0.04677473004491199,
"acc_norm": 0.4473684210526316,
"acc_norm_stderr": 0.04677473004491199
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5793103448275863,
"acc_stderr": 0.0411391498118926,
"acc_norm": 0.5793103448275863,
"acc_norm_stderr": 0.0411391498118926
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4074074074074074,
"acc_stderr": 0.025305906241590632,
"acc_norm": 0.4074074074074074,
"acc_norm_stderr": 0.025305906241590632
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.46825396825396826,
"acc_stderr": 0.04463112720677172,
"acc_norm": 0.46825396825396826,
"acc_norm_stderr": 0.04463112720677172
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252606,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252606
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6967741935483871,
"acc_stderr": 0.02614868593067175,
"acc_norm": 0.6967741935483871,
"acc_norm_stderr": 0.02614868593067175
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5270935960591133,
"acc_stderr": 0.03512819077876106,
"acc_norm": 0.5270935960591133,
"acc_norm_stderr": 0.03512819077876106
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7515151515151515,
"acc_stderr": 0.033744026441394036,
"acc_norm": 0.7515151515151515,
"acc_norm_stderr": 0.033744026441394036
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7626262626262627,
"acc_stderr": 0.030313710538198906,
"acc_norm": 0.7626262626262627,
"acc_norm_stderr": 0.030313710538198906
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8601036269430051,
"acc_stderr": 0.025033870583015178,
"acc_norm": 0.8601036269430051,
"acc_norm_stderr": 0.025033870583015178
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6128205128205129,
"acc_stderr": 0.02469721693087894,
"acc_norm": 0.6128205128205129,
"acc_norm_stderr": 0.02469721693087894
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3592592592592593,
"acc_stderr": 0.02925290592725198,
"acc_norm": 0.3592592592592593,
"acc_norm_stderr": 0.02925290592725198
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6134453781512605,
"acc_stderr": 0.03163145807552378,
"acc_norm": 0.6134453781512605,
"acc_norm_stderr": 0.03163145807552378
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3576158940397351,
"acc_stderr": 0.03913453431177258,
"acc_norm": 0.3576158940397351,
"acc_norm_stderr": 0.03913453431177258
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8146788990825689,
"acc_stderr": 0.016659279700295838,
"acc_norm": 0.8146788990825689,
"acc_norm_stderr": 0.016659279700295838
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.47685185185185186,
"acc_stderr": 0.03406315360711507,
"acc_norm": 0.47685185185185186,
"acc_norm_stderr": 0.03406315360711507
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7843137254901961,
"acc_stderr": 0.028867431449849316,
"acc_norm": 0.7843137254901961,
"acc_norm_stderr": 0.028867431449849316
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7679324894514767,
"acc_stderr": 0.02747974455080851,
"acc_norm": 0.7679324894514767,
"acc_norm_stderr": 0.02747974455080851
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6457399103139013,
"acc_stderr": 0.032100621541349864,
"acc_norm": 0.6457399103139013,
"acc_norm_stderr": 0.032100621541349864
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7862595419847328,
"acc_stderr": 0.0359546161177469,
"acc_norm": 0.7862595419847328,
"acc_norm_stderr": 0.0359546161177469
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7768595041322314,
"acc_stderr": 0.03800754475228732,
"acc_norm": 0.7768595041322314,
"acc_norm_stderr": 0.03800754475228732
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.75,
"acc_stderr": 0.04186091791394607,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04186091791394607
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7300613496932515,
"acc_stderr": 0.034878251684978906,
"acc_norm": 0.7300613496932515,
"acc_norm_stderr": 0.034878251684978906
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.45535714285714285,
"acc_stderr": 0.047268355537191,
"acc_norm": 0.45535714285714285,
"acc_norm_stderr": 0.047268355537191
},
"harness|hendrycksTest-management|5": {
"acc": 0.7864077669902912,
"acc_stderr": 0.040580420156460344,
"acc_norm": 0.7864077669902912,
"acc_norm_stderr": 0.040580420156460344
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8461538461538461,
"acc_stderr": 0.023636873317489298,
"acc_norm": 0.8461538461538461,
"acc_norm_stderr": 0.023636873317489298
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7726692209450831,
"acc_stderr": 0.01498727064094601,
"acc_norm": 0.7726692209450831,
"acc_norm_stderr": 0.01498727064094601
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6791907514450867,
"acc_stderr": 0.025131000233647893,
"acc_norm": 0.6791907514450867,
"acc_norm_stderr": 0.025131000233647893
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3642458100558659,
"acc_stderr": 0.0160943387684746,
"acc_norm": 0.3642458100558659,
"acc_norm_stderr": 0.0160943387684746
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7352941176470589,
"acc_stderr": 0.02526169121972948,
"acc_norm": 0.7352941176470589,
"acc_norm_stderr": 0.02526169121972948
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7009646302250804,
"acc_stderr": 0.02600330111788514,
"acc_norm": 0.7009646302250804,
"acc_norm_stderr": 0.02600330111788514
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6759259259259259,
"acc_stderr": 0.02604176620271716,
"acc_norm": 0.6759259259259259,
"acc_norm_stderr": 0.02604176620271716
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4645390070921986,
"acc_stderr": 0.029752389657427047,
"acc_norm": 0.4645390070921986,
"acc_norm_stderr": 0.029752389657427047
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.43089960886571055,
"acc_stderr": 0.012647695889547231,
"acc_norm": 0.43089960886571055,
"acc_norm_stderr": 0.012647695889547231
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6323529411764706,
"acc_stderr": 0.029289413409403192,
"acc_norm": 0.6323529411764706,
"acc_norm_stderr": 0.029289413409403192
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6437908496732027,
"acc_stderr": 0.019373332420724504,
"acc_norm": 0.6437908496732027,
"acc_norm_stderr": 0.019373332420724504
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7090909090909091,
"acc_stderr": 0.04350271442923243,
"acc_norm": 0.7090909090909091,
"acc_norm_stderr": 0.04350271442923243
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7020408163265306,
"acc_stderr": 0.029279567411065674,
"acc_norm": 0.7020408163265306,
"acc_norm_stderr": 0.029279567411065674
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.746268656716418,
"acc_stderr": 0.030769444967296014,
"acc_norm": 0.746268656716418,
"acc_norm_stderr": 0.030769444967296014
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.83,
"acc_stderr": 0.03775251680686371,
"acc_norm": 0.83,
"acc_norm_stderr": 0.03775251680686371
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5301204819277109,
"acc_stderr": 0.03885425420866767,
"acc_norm": 0.5301204819277109,
"acc_norm_stderr": 0.03885425420866767
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7894736842105263,
"acc_stderr": 0.0312678171466318,
"acc_norm": 0.7894736842105263,
"acc_norm_stderr": 0.0312678171466318
},
"harness|truthfulqa:mc|0": {
"mc1": 0.26560587515299877,
"mc1_stderr": 0.015461027627253592,
"mc2": 0.39319507774571977,
"mc2_stderr": 0.014813613143803141
},
"harness|winogrande|5": {
"acc": 0.7671665351223362,
"acc_stderr": 0.011878201073856544
},
"harness|gsm8k|5": {
"acc": 0.3244882486732373,
"acc_stderr": 0.012896095359768106
}
}
```
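The aggregated "all" block above can be recomputed locally from the per-task entries. The sketch below uses the same field names as the JSON above, but only a small illustrative excerpt of the values; it averages `acc` over the hendrycksTest (MMLU-style) tasks:

```python
# Excerpt of a results payload in the format shown above (values illustrative,
# copied from two of the per-task entries).
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.28, "acc_norm": 0.28},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.562962962962963, "acc_norm": 0.562962962962963},
    "harness|truthfulqa:mc|0": {"mc1": 0.26560587515299877, "mc2": 0.39319507774571977},
}

# Select only the MMLU-style tasks and average their `acc` scores.
mmlu = {k: v for k, v in results.items() if k.startswith("harness|hendrycksTest-")}
mean_acc = sum(v["acc"] for v in mmlu.values()) / len(mmlu)
print(round(mean_acc, 4))
```

With the full 57-task payload, the same loop reproduces the MMLU portion of the aggregated metrics; other keys (e.g. `mc1`, `mc2` for TruthfulQA) are averaged separately.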
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
DurreSudoku/dummy_image_dataset | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Electronic
'1': Pop
splits:
- name: train
num_bytes: 185947.0
num_examples: 3
- name: validation
num_bytes: 126064.0
num_examples: 2
- name: test
num_bytes: 196391.0
num_examples: 3
download_size: 452851
dataset_size: 508402.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
DSSGxMunich/bp_bavaria_content | ---
license: mit
---
|
bigscience-data/roots_indic-gu_mkb | ---
language: gu
license: cc-by-sa-4.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_indic-gu_mkb
# mkb
- Dataset uid: `mkb`
### Description
The Prime Minister's speeches - Mann Ki Baat, on All India Radio, translated into many languages.
### Homepage
- https://huggingface.co/datasets/mkb
- http://preon.iiit.ac.in/~jerin/bhasha/
### Licensing
### Speaker Locations
### Sizes
- 0.0009 % of total
- 0.0174 % of indic-ta
- 0.0252 % of indic-ml
- 0.0416 % of indic-mr
- 0.0601 % of indic-gu
- 0.0047 % of indic-bn
- 0.0040 % of indic-hi
- 0.0185 % of indic-te
- 0.0162 % of indic-or
- 0.0026 % of indic-ur
### BigScience processing steps
#### Filters applied to: indic-ta
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ml
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-mr
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-gu
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-bn
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-hi
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-te
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-or
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
#### Filters applied to: indic-ur
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
|
irds/mr-tydi_ru_train | ---
pretty_name: '`mr-tydi/ru/train`'
viewer: false
source_datasets: ['irds/mr-tydi_ru']
task_categories:
- text-retrieval
---
# Dataset Card for `mr-tydi/ru/train`
The `mr-tydi/ru/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/mr-tydi#mr-tydi/ru/train).
# Data
This dataset provides:
- `queries` (i.e., topics); count=5,366
- `qrels`: (relevance assessments); count=5,366
- For `docs`, use [`irds/mr-tydi_ru`](https://huggingface.co/datasets/irds/mr-tydi_ru)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/mr-tydi_ru_train', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/mr-tydi_ru_train', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@article{Zhang2021MrTyDi,
title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval},
author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin},
year={2021},
journal={arXiv:2108.08787},
}
@article{Clark2020TyDiQa,
title={{TyDi QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
author={Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
year={2020},
journal={Transactions of the Association for Computational Linguistics}
}
```
|
ibragim-bad/hs_multilang | ---
dataset_info:
- config_name: ar
features:
- name: ind
dtype: int32
- name: activity_label
dtype: string
- name: ctx_a
dtype: string
- name: ctx_b
dtype: string
- name: ctx
dtype: string
- name: endings
sequence: string
- name: source_id
dtype: string
- name: split
dtype: string
- name: split_type
dtype: string
- name: label
dtype: string
splits:
- name: validation
num_bytes: 15026500
num_examples: 9176
download_size: 7468005
dataset_size: 15026500
- config_name: de
features:
- name: ind
dtype: int32
- name: activity_label
dtype: string
- name: ctx_a
dtype: string
- name: ctx_b
dtype: string
- name: ctx
dtype: string
- name: endings
sequence: string
- name: source_id
dtype: string
- name: split
dtype: string
- name: split_type
dtype: string
- name: label
dtype: string
splits:
- name: validation
num_bytes: 12344284
num_examples: 9368
download_size: 7095322
dataset_size: 12344284
- config_name: es
features:
- name: ind
dtype: int32
- name: activity_label
dtype: string
- name: ctx_a
dtype: string
- name: ctx_b
dtype: string
- name: ctx
dtype: string
- name: endings
sequence: string
- name: source_id
dtype: string
- name: split
dtype: string
- name: split_type
dtype: string
- name: label
dtype: string
splits:
- name: validation
num_bytes: 11630674
num_examples: 9374
download_size: 6725858
dataset_size: 11630674
- config_name: fr
features:
- name: ind
dtype: int32
- name: activity_label
dtype: string
- name: ctx_a
dtype: string
- name: ctx_b
dtype: string
- name: ctx
dtype: string
- name: endings
sequence: string
- name: source_id
dtype: string
- name: split
dtype: string
- name: split_type
dtype: string
- name: label
dtype: string
splits:
- name: validation
num_bytes: 12527721
num_examples: 9338
download_size: 7040656
dataset_size: 12527721
- config_name: he
features:
- name: ind
dtype: int64
- name: ctx
dtype: string
- name: ctx_a
dtype: string
- name: ctx_b
dtype: string
- name: endings
sequence: string
- name: activity_label
dtype: string
- name: source_id
dtype: string
- name: split
dtype: string
- name: split_type
dtype: string
- name: label
dtype: string
splits:
- name: validation
num_bytes: 11346822
num_examples: 8355
download_size: 5155175
dataset_size: 11346822
- config_name: it
features:
- name: ind
dtype: int32
- name: activity_label
dtype: string
- name: ctx_a
dtype: string
- name: ctx_b
dtype: string
- name: ctx
dtype: string
- name: endings
sequence: string
- name: source_id
dtype: string
- name: split
dtype: string
- name: split_type
dtype: string
- name: label
dtype: string
splits:
- name: validation
num_bytes: 11458511
num_examples: 9193
download_size: 6651885
dataset_size: 11458511
- config_name: ru
features:
- name: ind
dtype: int32
- name: activity_label
dtype: string
- name: ctx_a
dtype: string
- name: ctx_b
dtype: string
- name: ctx
dtype: string
- name: endings
sequence: string
- name: source_id
dtype: string
- name: split
dtype: string
- name: split_type
dtype: string
- name: label
dtype: string
splits:
- name: validation
num_bytes: 18603749
num_examples: 9272
download_size: 9065335
dataset_size: 18603749
configs:
- config_name: ar
data_files:
- split: validation
path: ar/validation-*
- config_name: de
data_files:
- split: validation
path: de/validation-*
- config_name: es
data_files:
- split: validation
path: es/validation-*
- config_name: fr
data_files:
- split: validation
path: fr/validation-*
- config_name: he
data_files:
- split: validation
path: he/validation-*
- config_name: it
data_files:
- split: validation
path: it/validation-*
- config_name: ru
data_files:
- split: validation
path: ru/validation-*
---
|
vg055/analisis-sentimientos-textos-turisitcos-mx-polaridad-DataAugmentationV1 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 99751382
num_examples: 243912
- name: test
num_bytes: 10317131
num_examples: 25171
download_size: 67444651
dataset_size: 110068513
---
# Dataset Card for "analisis-sentimientos-textos-turisitcos-mx-polaridad-DataAugmentationV1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Technewvision/TheTechHub | ---
license: unknown
---
|
nianlong/long-doc-extractive-summarization-truncated-pubmed | ---
license: apache-2.0
---
|
PaDaS-Lab/gdpr-compliant-ner | ---
license: mit
language:
- en
---
This dataset consists of privacy policies from 44 online platforms, annotated in line with the GDPR guidelines.
The policies are manually annotated with NER tags that highlight entities relevant to GDPR privacy policies, such as data controllers, data sources, authorities, etc.
*Statistics*:
| **Label** | **Frequency** | **Percentage** |
|-----------|---------------|----------------|
| PD | 4200 | 23.24% |
| P | 2909 | 16.09% |
| RP | 1745 | 9.65% |
| DC | 1559 | 8.62% |
| NPD | 955 | 5.28% |
| TP | 942 | 5.21% |
| CONS | 686 | 3.79% |
| TM | 648 | 3.58% |
| R | 585 | 3.24% |
| DS | 510 | 2.82% |
| LB | 419 | 2.32% |
| DSO | 408 | 2.26% |
| OM | 386 | 2.14% |
| LI | 306 | 1.69% |
| RET | 291 | 1.61% |
| SNEU | 246 | 1.36% |
| RI | 221 | 1.22% |
| DP | 143 | 0.79% |
| CONT | 129 | 0.71% |
| A | 124 | 0.69% |
| ADM | 109 | 0.60% |
| SEU | 100 | 0.55% |
| DSR17 | 84 | 0.46% |
| DSR15 | 67 | 0.37% |
| DPO | 58 | 0.32% |
| DSR16 | 57 | 0.32% |
| DSR21 | 50 | 0.28% |
| NRP | 38 | 0.21% |
| DSR18 | 37 | 0.20% |
| LC | 29 | 0.16% |
| DSR20 | 29 | 0.16% |
| DSR19 | 4 | 0.02% |
| DSR22 | 2 | 0.01% |
| **Overall** | **18076** | **100.00%** |
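The percentage column follows directly from the frequencies; a minimal sketch that reproduces it for the top few labels (counts copied from the table above):

```python
# Reproduce the "Percentage" column from the "Frequency" column.
counts = {"PD": 4200, "P": 2909, "RP": 1745, "DC": 1559, "NPD": 955}  # top-5 labels
total = 18076  # overall annotation count from the table

for label, freq in counts.items():
    print(f"{label}: {freq} ({freq / total:.2%})")  # e.g. "PD: 4200 (23.24%)"
```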
|
Aznor/MeetingBank-original | ---
license: cc-by-nc-sa-4.0
task_categories:
- summarization
---
This dataset is the original train-validation-test split from the [MeetingBank dataset](https://meetingbank.github.io/) used to train and evaluate the summarisation models in the original paper cited below.
**Overview**
MeetingBank is a benchmark dataset created from the city councils of 6 major U.S. cities to supplement existing datasets. It contains 1,366 meetings with over 3,579 hours of video, as well as transcripts, PDF documents of meeting minutes, agendas, and other metadata. On average, a council meeting is 2.6 hours long and its transcript contains over 28k tokens, making it a valuable testbed for meeting summarizers and for extracting structure from meeting videos. The dataset contains 6,892 segment-level summarization instances for training and evaluation.
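As a quick sanity check, the quoted 2.6-hour average follows from the totals above:

```python
# Totals from the overview: 1,366 meetings, 3,579+ hours of video.
meetings = 1366
video_hours = 3579

avg_hours = video_hours / meetings
print(f"average meeting length: {avg_hours:.1f} hours")  # ≈ 2.6
```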
**Acknowledgement**
Please cite the following paper in work that makes use of this dataset:
[MeetingBank: A Benchmark Dataset for Meeting Summarization](https://arxiv.org/abs/2305.17529) \
Yebowen Hu, Tim Ganter, Hanieh Deilamsalehy, Franck Dernoncourt, Hassan Foroosh, Fei Liu \
In main conference of Association for Computational Linguistics (ACL’23), Toronto, Canada.
**Bibtex**
```
@inproceedings{hu-etal-2023-meetingbank,
title = "MeetingBank: A Benchmark Dataset for Meeting Summarization",
author = "Yebowen Hu and Tim Ganter and Hanieh Deilamsalehy and Franck Dernoncourt and Hassan Foroosh and Fei Liu",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL)",
month = July,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
}
```
**Resources**
The MeetingBank dataset will be hosted at Zenodo, and the audio files of each meeting will be hosted individually on Hugging Face. The resources will include meeting audio, transcripts, the main MeetingBank JSON file, summaries from 6 systems, and human annotations.
**Summary, Segments Transcripts and VideoList:** [zenodo](https://zenodo.org/record/7989108)
**Meeting Audios:** [HuggingFace](https://huggingface.co/datasets/huuuyeah/MeetingBank_Audio)
**Meeting Transcripts:** [HuggingFace](https://huggingface.co/datasets/lytang/MeetingBank-transcript)
Some scripts can be found in github repo [MeetingBank_Utils](https://github.com/YebowenHu/MeetingBank-utils) |
NTUYG/openeval | ---
license: apache-2.0
language:
- en
tags:
- code
--- |
griffin/incr_summ | ---
dataset_info:
features:
- name: id
dtype: string
- name: prompt
dtype: string
- name: completion
dtype: string
- name: task
dtype: string
splits:
- name: train
num_bytes: 24221730
num_examples: 5145
download_size: 5083058
dataset_size: 24221730
---
# Dataset Card for "incr_summ"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/find_first_sent_train_30_eval_10_recite | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: title
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 111799
num_examples: 70
- name: validation
num_bytes: 18607
num_examples: 10
download_size: 0
dataset_size: 130406
---
# Dataset Card for "find_first_sent_train_30_eval_10_recite"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bvkbharadwaj/Image_dataset_test | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 207110.0
num_examples: 1
download_size: 208201
dataset_size: 207110.0
---
# Dataset Card for "Image_dataset_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BangumiBase/toarukagakunorailgunt | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of To Aru Kagaku No Railgun T
This is the image base of the bangumi To Aru Kagaku no Railgun T. We detected 36 characters and 3,707 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean and may contain noise.** If you intend to train models on this dataset manually, we recommend preprocessing the downloaded data to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 53 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 88 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 44 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 36 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 29 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 76 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 74 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 36 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 18 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 21 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 44 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 251 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 637 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 72 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 58 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 68 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 88 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 436 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 17 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 247 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 34 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 17 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 32 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 8 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 32 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 18 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 19 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 11 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 251 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 66 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 9 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 70 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 145 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| 33 | 68 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| 34 | 9 | [Download](34/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 525 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
joshuapsa/gpt-generated-news-sentences | ---
dataset_info:
features:
- name: class_index
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: text
dtype: string
- name: _air
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: _cybersecurity
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: _domestic_unrest_violence
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: _extreme_weather
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: _forced_labor
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: _general_biz_trend
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: _later_report
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: _lawsuit_legal_insurance
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: _leisure_other_news
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: _maritime
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: _pandemics_large_scale_diseases
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: _railway
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: _strike
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: _trade_war_embargos_bans
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: _war_conflict
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: _warehouse_fire
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 266620
num_examples: 640
- name: valid
num_bytes: 33348
num_examples: 80
- name: test
num_bytes: 33277
num_examples: 80
download_size: 100323
dataset_size: 333245
license: mit
task_categories:
- text-classification
language:
- en
size_categories:
- 1K<n<10K
---
# Dataset Card
- This dataset was created solely for the purpose of code testing.
- This dataset was generated by prompting ChatGPT to create sample news sentences according to a topic.
- Sample prompt: "generate 50 sentences on the topic of "very recent breaking news on wars and conflicts events" with some sample location names. One example: "a missile struck near a residential building in Kiev last night, Russia denied Ukraine's accusations of attacking non-military targets""
- The output sentences were then used to construct the Hugging Face dataset.
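The `labels` sequence appears to be a multi-hot view of the per-topic binary columns; a minimal sketch of that conversion (the topic order here is truncated and hypothetical — the schema lists 16 `_`-prefixed topic columns):

```python
# Hypothetical, truncated topic order; the dataset defines 16 "_"-prefixed columns.
TOPICS = ["_air", "_cybersecurity", "_strike", "_war_conflict"]

def to_multi_hot(row: dict) -> list:
    """Build a multi-hot label vector from per-topic 0/1 columns."""
    return [int(row.get(topic, 0)) for topic in TOPICS]

row = {"text": "A missile struck near a residential building last night.", "_war_conflict": 1}
print(to_multi_hot(row))  # → [0, 0, 0, 1]
```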
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
charlieoneill/resid_streams | ---
dataset_info:
features:
- name: data
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2118689804
num_examples: 100
download_size: 2119515837
dataset_size: 2118689804
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
bandad/sayoko-tts-corpus | ---
license: cc-by-4.0
task_categories:
- text-to-speech
language:
- ja
---
# Sayoko Speech Corpus
## How to Download
A zip archive of the dataset is available on [gdrive](https://drive.google.com/file/d/1StMFlDH_RcBAaAyRwEIOWuTQFetVaMUU/view?usp=sharing).
It can also be downloaded from the Hugging Face Hub with the following script.
```
# pip install --upgrade huggingface_hub
from huggingface_hub import snapshot_download
snapshot_download(repo_id="bandad/sayoko-tts-corpus", repo_type="dataset", revision="main", local_dir="./sayoko-tts-corpus")
```
## Overview
This is a speech corpus of an 81-year-old woman.
The `wav_noise` directory contains the audio as actually recorded. Because of the speaker's age, recording took place at her home, so the files contain background noise such as bell crickets, as well as frequent lip noise. The `wav` directory contains the audio with as much of the noise removed as possible. For tasks such as speech synthesis, please use the audio files under the `wav` directory.
Labels containing phonemes plus prosody symbols are stored in the `phoneme` directory as `<audio filename>.txt`.
Labels containing kana (converted from the phonemes) plus prosody symbols are stored in the `kana` directory as `<audio filename>.txt`.
The accents have hardly been corrected and therefore contain errors, but we have confirmed that text-to-speech works.
The phonemes have been corrected manually; if you find any errors, please contact us.
Contact: bandad.kw@gmail.com
github: https://github.com/k-washi
x: https://twitter.com/kwashizzz
# Terms of Use
- Free to use, including for commercial purposes.
- Please credit "Fusic サヨ子音声コーパス (Sayoko Speech Corpus)" together with this repository's [URL](https://huggingface.co/datasets/bandad/sayoko-tts-corpus). For videos, printed matter, and other media where a link cannot be included, omitting the link is OK.
  Example credit: `[Fusic/サヨ子音声コーパス](https://huggingface.co/datasets/bandad/sayoko-tts-corpus)`
- Do not use the corpus in erotic or grotesque works. Horror and the like are fine. If in doubt, please consult us.
- When redistributing, distribute this README.md along with the data. Please notify us via the contact address or DM (after the fact is fine). If anything about redistribution is unclear, please consult us.
- Direct links to the audio files or other contents of this corpus are prohibited. To use them, download the files, upload them to your own server, and build your app to reference that copy.
- Feel free to use the corpus in speech-related research. No prior application is needed, even for conference presentations and the like.
# Details
## Attributes
- Female
- 81 years old
## Other
The prosody symbols are as follows.
| Label | Description |
| --- | --- |
| ^ | Start of sentence |
| $ | End of sentence |
| _ | Pause |
| # | Accent phrase boundary |
| [ | Pitch rise |
| ] | Accent nucleus |
| ? | Question |
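A minimal sketch of separating the prosody symbols above from phonemes in a label sequence (assuming space-separated labels; the example string is hypothetical):

```python
# The seven prosody symbols from the table above.
PROSODY = set("^$_#[]?")

def split_label(label: str):
    """Split a space-separated phoneme+prosody label into two streams."""
    phonemes, prosody = [], []
    for symbol in label.split():
        (prosody if symbol in PROSODY else phonemes).append(symbol)
    return phonemes, prosody

print(split_label("^ k o [ N n i ch i w a $"))
```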
The list of phonemes:
```
[
'a',
'i',
'u',
'e',
'o',
'k',
's',
't',
'n',
'h',
'm',
'y',
'r',
'w',
'g',
'z',
'd',
'p',
'b',
'ky',
'gy',
'sh',
'j',
'ch',
'ny',
'dy',
'f',
'hy',
'py',
'by',
'v',
'my',
'ry',
'cl',
'ty',
'N',
'ts',
]
``` |
heliosprime/twitter_dataset_1713025425 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 16540
num_examples: 38
download_size: 12008
dataset_size: 16540
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1713025425"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pythainlp/thainer-corpus-v2 | ---
dataset_info:
features:
- name: words
sequence: string
- name: ner
sequence:
class_label:
names:
'0': B-PERSON
'1': I-PERSON
'2': O
'3': B-ORGANIZATION
'4': B-LOCATION
'5': I-ORGANIZATION
'6': I-LOCATION
'7': B-DATE
'8': I-DATE
'9': B-TIME
'10': I-TIME
'11': B-MONEY
'12': I-MONEY
'13': B-FACILITY
'14': I-FACILITY
'15': B-URL
'16': I-URL
'17': B-PERCENT
'18': I-PERCENT
'19': B-LEN
'20': I-LEN
'21': B-AGO
'22': I-AGO
'23': B-LAW
'24': I-LAW
'25': B-PHONE
'26': I-PHONE
'27': B-EMAIL
'28': I-EMAIL
'29': B-ZIP
'30': B-TEMPERATURE
'31': I-TEMPERATURE
'32': B-DTAE
'33': I-DTAE
'34': B-DATA
'35': I-DATA
splits:
- name: train
num_bytes: 3736419
num_examples: 3938
- name: validation
num_bytes: 1214580
num_examples: 1313
- name: test
num_bytes: 1242609
num_examples: 1313
download_size: 974230
dataset_size: 6193608
license: cc-by-3.0
task_categories:
- token-classification
language:
- th
---
# Dataset Card for "thainer-corpus-v2"
## News!!!
> Thai NER v2.2 has been released! Please use Thai NER v2.2 instead of this corpus.
> Thai NER v2.2: [https://huggingface.co/datasets/pythainlp/thainer-corpus-v2.2](https://huggingface.co/datasets/pythainlp/thainer-corpus-v2.2)
Thai Named Entity Recognition Corpus
Home Page: [https://pythainlp.github.io/Thai-NER/version/2](https://pythainlp.github.io/Thai-NER/version/2)
Training script and split data: [https://zenodo.org/record/7761354](https://zenodo.org/record/7761354)
**You can download the .conll files to train a named entity model from [https://zenodo.org/record/7761354](https://zenodo.org/record/7761354).**
**Size**
- Train: 3,938 docs
- Validation: 1,313 docs
- Test: 1,313 Docs
Some data comes from crowdsourcing between Dec 2018 and Nov 2019. [https://github.com/wannaphong/thai-ner](https://github.com/wannaphong/thai-ner)
**Domain**
- News (IT, politics, economy, social)
- PR (KKU news)
- general
**Source**
- I used some data from Nutcha's thesis (http://pioneer.chula.ac.th/~awirote/Data-Nutcha.zip) and improved it by rechecking and adding more tags.
- Twitter
- Blognone.com - It news
- thaigov.go.th
- kku.ac.th
And more (the list has been lost).
**Tag**
- DATE - date
- TIME - time
- EMAIL - email
- LEN - length
- LOCATION - Location
- ORGANIZATION - Company / Organization
- PERSON - Person name
- PHONE - phone number
- TEMPERATURE - temperature
- URL - URL
- ZIP - Zip code
- MONEY - the amount
- LAW - legislation
- PERCENT - PERCENT
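The feature schema above encodes these tags in the BIO scheme (`B-`/`I-` prefixes plus `O`); a minimal sketch of grouping token-level tags into entity spans (the tokens are hypothetical, and since Thai is written without spaces, span parts are joined directly):

```python
def bio_to_spans(tokens, tags):
    """Group BIO-tagged tokens into (entity_type, surface_text) spans."""
    spans, current = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append(current)
            current = (tag[2:], [token])
        elif tag.startswith("I-") and current and tag[2:] == current[0]:
            current[1].append(token)
        else:  # "O", or an I- tag that does not continue the open span
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(etype, "".join(parts)) for etype, parts in spans]

print(bio_to_spans(["นาย", "สมชาย", "ไป"], ["B-PERSON", "I-PERSON", "O"]))
# → [('PERSON', 'นายสมชาย')]
```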
Download: [HuggingFace Hub](https://huggingface.co/datasets/pythainlp/thainer-corpus-v2)
## Cite
> Wannaphong Phatthiyaphaibun. (2022). Thai NER 2.0 (2.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.7761354
or BibTeX
```
@dataset{wannaphong_phatthiyaphaibun_2022_7761354,
author = {Wannaphong Phatthiyaphaibun},
title = {Thai NER 2.0},
month = sep,
year = 2022,
publisher = {Zenodo},
version = {2.0},
doi = {10.5281/zenodo.7761354},
url = {https://doi.org/10.5281/zenodo.7761354}
}
``` |
open-llm-leaderboard/details_Walmart-the-bag__Misted-7B | ---
pretty_name: Evaluation run of Walmart-the-bag/Misted-7B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Walmart-the-bag/Misted-7B](https://huggingface.co/Walmart-the-bag/Misted-7B)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 1 configuration, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Walmart-the-bag__Misted-7B\"\
,\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese\
\ are the [latest results from run 2023-12-03T20:09:02.176077](https://huggingface.co/datasets/open-llm-leaderboard/details_Walmart-the-bag__Misted-7B/blob/main/results_2023-12-03T20-09-02.176077.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5959059893858984,\n\
\ \"acc_stderr\": 0.01351675297272172\n },\n \"harness|gsm8k|5\": {\n\
\ \"acc\": 0.5959059893858984,\n \"acc_stderr\": 0.01351675297272172\n\
\ }\n}\n```"
repo_url: https://huggingface.co/Walmart-the-bag/Misted-7B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_03T20_09_02.176077
path:
- '**/details_harness|gsm8k|5_2023-12-03T20-09-02.176077.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-03T20-09-02.176077.parquet'
- config_name: results
data_files:
- split: 2023_12_03T20_09_02.176077
path:
- results_2023-12-03T20-09-02.176077.parquet
- split: latest
path:
- results_2023-12-03T20-09-02.176077.parquet
---
# Dataset Card for Evaluation run of Walmart-the-bag/Misted-7B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Walmart-the-bag/Misted-7B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [Walmart-the-bag/Misted-7B](https://huggingface.co/Walmart-the-bag/Misted-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Walmart-the-bag__Misted-7B",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-03T20:09:02.176077](https://huggingface.co/datasets/open-llm-leaderboard/details_Walmart-the-bag__Misted-7B/blob/main/results_2023-12-03T20-09-02.176077.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5959059893858984,
"acc_stderr": 0.01351675297272172
},
"harness|gsm8k|5": {
"acc": 0.5959059893858984,
"acc_stderr": 0.01351675297272172
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
haebo1/test | ---
pretty_name: KoBEST
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- ko
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
configs:
- config_name: boolq
data_files: "boolq/*"
- config_name: copa
data_files: "copa/*"
- config_name: hellaswag
data_files: "hellaswag/*"
- config_name: sentineg
data_files: "sentineg/*"
- config_name: wic
data_files: "wic/*"
---
# Dataset Card for KoBEST
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/SKT-LSL/KoBEST_datarepo
- **Paper:**
- **Point of Contact:** https://github.com/SKT-LSL/KoBEST_datarepo/issues
### Dataset Summary
KoBEST is a Korean benchmark suite consisting of 5 natural language understanding tasks that require advanced knowledge of Korean.
### Supported Tasks and Leaderboards
Boolean Question Answering, Choice of Plausible Alternatives, Words-in-Context, HellaSwag, Sentiment Negation Recognition
### Languages
`ko-KR`
## Dataset Structure
### Data Instances
#### KB-BoolQ
An example of a data point looks as follows.
```
{'paragraph': '두아 리파(Dua Lipa, 1995년 8월 22일 ~ )는 잉글랜드의 싱어송라이터, 모델이다. BBC 사운드 오브 2016 명단에 노미닛되었다. 싱글 "Be the One"가 영국 싱글 차트 9위까지 오르는 등 성과를 보여주었다.',
'question': '두아 리파는 영국인인가?',
'label': 1}
```
#### KB-COPA
An example of a data point looks as follows.
```
{'premise': '물을 오래 끓였다.',
'question': '결과',
'alternative_1': '물의 양이 늘어났다.',
'alternative_2': '물의 양이 줄어들었다.',
'label': 1}
```
#### KB-WiC
An example of a data point looks as follows.
```
{'word': '양분',
'context_1': '토양에 [양분]이 풍부하여 나무가 잘 자란다. ',
'context_2': '태아는 모체로부터 [양분]과 산소를 공급받게 된다.',
'label': 1}
```
#### KB-HellaSwag
An example of a data point looks as follows.
```
{'context': '모자를 쓴 투수가 타자에게 온 힘을 다해 공을 던진다. 공이 타자에게 빠른 속도로 다가온다. 타자가 공을 배트로 친다. 배트에서 깡 소리가 난다. 공이 하늘 위로 날아간다.',
'ending_1': '외야수가 떨어지는 공을 글러브로 잡는다.',
'ending_2': '외야수가 공이 떨어질 위치에 자리를 잡는다.',
'ending_3': '심판이 아웃을 외친다.',
'ending_4': '외야수가 공을 따라 뛰기 시작한다.',
'label': 3}
```
#### KB-SentiNeg
An example of a data point looks as follows.
```
{'sentence': '택배사 정말 마음에 듬',
'label': 1}
```
### Data Fields
### KB-BoolQ
+ `paragraph`: a `string` feature
+ `question`: a `string` feature
+ `label`: a classification label, with possible values `False`(0) and `True`(1)
### KB-COPA
+ `premise`: a `string` feature
+ `question`: a `string` feature
+ `alternative_1`: a `string` feature
+ `alternative_2`: a `string` feature
+ `label`: an answer candidate label, with possible values `alternative_1`(0) and `alternative_2`(1)
### KB-WiC
+ `target_word`: a `string` feature
+ `context_1`: a `string` feature
+ `context_2`: a `string` feature
+ `label`: a classification label, with possible values `False`(0) and `True`(1)
### KB-HellaSwag
+ `context`: a `string` feature
+ `ending_1`: a `string` feature
+ `ending_2`: a `string` feature
+ `ending_3`: a `string` feature
+ `ending_4`: a `string` feature
+ `label`: an answer candidate label, with possible values `ending_1`(0), `ending_2`(1), `ending_3`(2) and `ending_4`(3)
### KB-SentiNeg
+ `sentence`: a `string` feature
+ `label`: a classification label, with possible values `Negative`(0) and `Positive`(1)
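The integer labels above can be mapped back to their human-readable values with a small helper. This is an illustrative sketch, not part of the dataset itself; the mappings follow the field descriptions and data instances above, and the helper name is our own:

```python
# Label-decoding helpers for the five KoBEST tasks. The integer-to-string
# mappings follow the field descriptions above; KB-HellaSwag labels index
# the four candidate endings shown in its data instance.
LABEL_MAPS = {
    "boolq": {0: "False", 1: "True"},
    "copa": {0: "alternative_1", 1: "alternative_2"},
    "wic": {0: "False", 1: "True"},
    "hellaswag": {0: "ending_1", 1: "ending_2", 2: "ending_3", 3: "ending_4"},
    "sentineg": {0: "Negative", 1: "Positive"},
}

def decode_label(task: str, label: int) -> str:
    """Return the human-readable value of an integer label for a task."""
    return LABEL_MAPS[task][label]

# e.g. the KB-HellaSwag instance above has label 3
print(decode_label("hellaswag", 3))  # -> ending_4
```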
### Data Splits
#### KB-BoolQ
+ train: 3,665
+ dev: 700
+ test: 1,404
#### KB-COPA
+ train: 3,076
+ dev: 1,000
+ test: 1,000
#### KB-WiC
+ train: 3,318
+ dev: 1,260
+ test: 1,260
#### KB-HellaSwag
+ train: 3,665
+ dev: 700
+ test: 1,404
#### KB-SentiNeg
+ train: 3,649
+ dev: 400
+ test: 397
+ test_originated: 397 (the corresponding training data from which the test set originated)
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{https://doi.org/10.48550/arxiv.2204.04541,
  doi = {10.48550/ARXIV.2204.04541},
  url = {https://arxiv.org/abs/2204.04541},
  author = {Kim, Dohyeong and Jang, Myeongjun and Kwon, Deuk Sin and Davis, Eric},
  title = {KOBEST: Korean Balanced Evaluation of Significant Tasks},
  publisher = {arXiv},
  year = {2022},
}
```
### Contributions
Thanks to [@MJ-Jang](https://github.com/MJ-Jang) for adding this dataset. |
monmamo/venenia-blossom | ---
license: cc
language:
- en
tags:
- art
- female
- dracquin
- anthrope
pretty_name: Venenia Blossom
size_categories:
- n<1K
--- |
CyberHarem/hiryuu_kantaicollection | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of hiryuu/飛龍 (Kantai Collection)
This is the dataset of hiryuu/飛龍 (Kantai Collection), containing 500 images and their tags.
The core tags of this character are `short_hair, brown_hair, brown_eyes, breasts, one_side_up, large_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 455.50 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hiryuu_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 303.94 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hiryuu_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1099 | 600.08 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hiryuu_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 420.10 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hiryuu_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1099 | 785.84 MiB | [Download](https://huggingface.co/datasets/CyberHarem/hiryuu_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
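The IMG+TXT packages in the table are plain zip archives. A minimal sketch for fetching and unpacking one of them follows; the `fetch_package` helper, its `downloader` parameter, and the output directory name are our own, not part of the dataset:

```python
import os
import zipfile


def fetch_package(repo_id: str, filename: str, out_dir: str, downloader=None) -> str:
    """Download one package zip of this dataset from the Hub and extract it.

    `downloader` defaults to huggingface_hub.hf_hub_download; it is a
    parameter mainly so the extraction logic can be exercised offline.
    """
    if downloader is None:
        from huggingface_hub import hf_hub_download
        downloader = hf_hub_download
    zip_path = downloader(repo_id=repo_id, repo_type='dataset', filename=filename)
    os.makedirs(out_dir, exist_ok=True)
    with zipfile.ZipFile(zip_path, 'r') as zf:
        zf.extractall(out_dir)
    return out_dir


# Example (requires network): fetch the 800px IMG+TXT package; each image
# ships with a same-named .txt file holding its tags.
# fetch_package('CyberHarem/hiryuu_kantaicollection', 'dataset-800.zip', 'hiryuu_800')
```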
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/hiryuu_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, cowboy_shot, hakama_short_skirt, solo, green_hakama, looking_at_viewer, orange_kimono, smile, open_mouth, simple_background, yellow_kimono, index_finger_raised, one-hour_drawing_challenge, white_background |
| 1 | 7 |  |  |  |  |  | 2girls, green_hakama, hakama_short_skirt, wide_sleeves, blush, simple_background, smile, white_background, open_mouth, orange_kimono, solo_focus, yellow_kimono, cowboy_shot, looking_at_viewer, long_sleeves |
| 2 | 26 |  |  |  |  |  | flight_deck, arrow_(projectile), quiver, 1girl, solo, hakama_short_skirt, wide_sleeves, yugake, green_hakama, hachimaki, single_glove, holding_bow_(weapon), brown_gloves, cowboy_shot, looking_at_viewer, yellow_kimono |
| 3 | 5 |  |  |  |  |  | 1girl, arrow_(projectile), bow_(weapon), japanese_clothes, kyuudou, single_glove, skirt, solo, yugake, flight_deck, looking_at_viewer, quiver, open_mouth, wide_sleeves |
| 4 | 6 |  |  |  |  |  | 1girl, simple_background, solo, upper_body, looking_at_viewer, necklace, official_alternate_costume, open_mouth, ribbed_sweater, smile, collarbone, long_sleeves, twitter_username, white_background, yellow_sweater |
| 5 | 14 |  |  |  |  |  | 1girl, green_pants, official_alternate_costume, solo, jacket_around_waist, orange_sweater, ribbed_sweater, smile, looking_at_viewer, necklace, long_sleeves, open_mouth, plaid_jacket, simple_background, yellow_sweater, bag, blush, cowboy_shot, white_background |
| 6 | 7 |  |  |  |  |  | 1girl, alternate_costume, employee_uniform, looking_at_viewer, smile, solo, red_skirt, name_tag, open_mouth, blush, cowboy_shot, short_sleeves, simple_background, waitress, white_background, japanese_clothes, pleated_skirt |
| 7 | 8 |  |  |  |  |  | 1girl, clothes_writing, solo, black_skirt, looking_at_viewer, short_sleeves, smile, bag, official_alternate_costume, open_mouth, simple_background, yellow_shirt, white_background, wristwatch, orange_shirt |
| 8 | 14 |  |  |  |  |  | 1girl, solo, looking_at_viewer, navel, smile, cleavage, cowboy_shot, open_mouth, orange_bikini, cloud, day, outdoors, side-tie_bikini_bottom, blush, collarbone, blue_sky, simple_background, striped, yellow_bikini |
| 9 | 23 |  |  |  |  |  | 1girl, highleg_swimsuit, looking_at_viewer, blush, bangs, collarbone, smile, competition_swimsuit, simple_background, solo, standing, open_mouth, alternate_costume, bare_shoulders, black_one-piece_swimsuit, covered_navel, cowboy_shot, thighs, white_background, cleavage |
| 10 | 8 |  |  |  |  |  | hetero, nipples, 1boy, 1girl, blush, japanese_clothes, open_mouth, penis, smile, paizuri, solo_focus, bar_censor, cum_on_breasts |
| 11 | 7 |  |  |  |  |  | 1girl, detached_collar, fake_animal_ears, playboy_bunny, rabbit_ears, solo, cleavage, looking_at_viewer, wrist_cuffs, bowtie, strapless_leotard, black_pantyhose, blush, simple_background, smile, alternate_costume, black_leotard, open_mouth, yellow_leotard |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | cowboy_shot | hakama_short_skirt | solo | green_hakama | looking_at_viewer | orange_kimono | smile | open_mouth | simple_background | yellow_kimono | index_finger_raised | one-hour_drawing_challenge | white_background | 2girls | wide_sleeves | blush | solo_focus | long_sleeves | flight_deck | arrow_(projectile) | quiver | yugake | hachimaki | single_glove | holding_bow_(weapon) | brown_gloves | bow_(weapon) | japanese_clothes | kyuudou | skirt | upper_body | necklace | official_alternate_costume | ribbed_sweater | collarbone | twitter_username | yellow_sweater | green_pants | jacket_around_waist | orange_sweater | plaid_jacket | bag | alternate_costume | employee_uniform | red_skirt | name_tag | short_sleeves | waitress | pleated_skirt | clothes_writing | black_skirt | yellow_shirt | wristwatch | orange_shirt | navel | cleavage | orange_bikini | cloud | day | outdoors | side-tie_bikini_bottom | blue_sky | striped | yellow_bikini | highleg_swimsuit | bangs | competition_swimsuit | standing | bare_shoulders | black_one-piece_swimsuit | covered_navel | thighs | hetero | nipples | 1boy | penis | paizuri | bar_censor | cum_on_breasts | detached_collar | fake_animal_ears | playboy_bunny | rabbit_ears | wrist_cuffs | bowtie | strapless_leotard | black_pantyhose | black_leotard | yellow_leotard |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:--------------|:---------------------|:-------|:---------------|:--------------------|:----------------|:--------|:-------------|:--------------------|:----------------|:----------------------|:-----------------------------|:-------------------|:---------|:---------------|:--------|:-------------|:---------------|:--------------|:---------------------|:---------|:---------|:------------|:---------------|:-----------------------|:---------------|:---------------|:-------------------|:----------|:--------|:-------------|:-----------|:-----------------------------|:-----------------|:-------------|:-------------------|:-----------------|:--------------|:----------------------|:-----------------|:---------------|:------|:--------------------|:-------------------|:------------|:-----------|:----------------|:-----------|:----------------|:------------------|:--------------|:---------------|:-------------|:---------------|:--------|:-----------|:----------------|:--------|:------|:-----------|:-------------------------|:-----------|:----------|:----------------|:-------------------|:--------|:-----------------------|:-----------|:-----------------|:---------------------------|:----------------|:---------|:---------|:----------|:-------|:--------|:----------|:-------------|:-----------------|:------------------|:-------------------|:----------------|:--------------|:--------------|:---------|:--------------------|:------------------|:----------------|:-----------------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | | X | X | | X | X | X | X | X | X | X | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 26 |  |  |  |  |  | X | X | X | X | X | X | | | | | X | | | | | X | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | | | X | | X | | | X | | | | | | | X | | | | X | X | X | X | | X | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 6 |  |  |  |  |  | X | | | X | | X | | X | X | X | | | | X | | | | | X | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 14 |  |  |  |  |  | X | X | | X | | X | | X | X | X | | | | X | | | X | | X | | | | | | | | | | | | | | X | X | X | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 7 |  |  |  |  |  | X | X | | X | | X | | X | X | X | | | | X | | | X | | | | | | | | | | | | X | | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 8 |  |  |  |  |  | X | | | X | | X | | X | X | X | | | | X | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | X | | | | | X | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 14 |  |  |  |  |  | X | X | | X | | X | | X | X | X | | | | | | | X | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | |
| 9 | 23 |  |  |  |  |  | X | X | | X | | X | | X | X | X | | | | X | | | X | | | | | | | | | | | | | | | | | | | X | | | | | | | | X | | | | | | | | | | | | | X | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | |
| 10 | 8 |  |  |  |  |  | X | | | | | | | X | X | | | | | | | | X | X | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | |
| 11 | 7 |  |  |  |  |  | X | | | X | | X | | X | X | X | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X |
|
joey234/mmlu-computer_security-neg-prepend-fix | ---
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: negate_openai_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: neg_question
dtype: string
- name: fewshot_context
dtype: string
- name: ori_prompt
dtype: string
splits:
- name: dev
num_bytes: 5063
num_examples: 5
- name: test
num_bytes: 229284
num_examples: 100
download_size: 13363
dataset_size: 234347
---
# Dataset Card for "mmlu-computer_security-neg-prepend-fix"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
thobauma/harmless-poisoned-0.03-chuela2502-murder | ---
dataset_info:
features:
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 58402939.44335993
num_examples: 42537
download_size: 31364075
dataset_size: 58402939.44335993
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
stellar025/sd-webui | ---
license: openrail
---
|
pharaouk/biology_dataset_standardized_cluster_3 | ---
dataset_info:
features: []
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 324
dataset_size: 0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "biology_dataset_standardized_cluster_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CVasNLPExperiments/VQAv2_minival_no_image_google_flan_t5_xl_mode_A_Q_rices_ns_25994 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: question
dtype: string
- name: true_label
sequence: string
- name: prediction
dtype: string
splits:
- name: fewshot_0_clip_tags_LAION_ViT_H_14_2B_with_openai_Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption_caption_module_random_
num_bytes: 3710369
num_examples: 25994
download_size: 1335813
dataset_size: 3710369
---
# Dataset Card for "VQAv2_minival_no_image_google_flan_t5_xl_mode_A_Q_rices_ns_25994"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sam-liu-lmi/databricks-dolly-15k-alpaca-style | ---
license: apache-2.0
---
|
keras-dreambooth/rabbit-toy | ---
license: apache-2.0
size_categories:
- n<1K
tags:
- keras-dreambooth
- scifi
- diffusers
- text-to-image
---
## Dataset description
This dataset was used to fine-tune this [model](https://huggingface.co/keras-dreambooth/dreambooth_diffusion_toy)
## Demo
You can try it with this [demo](https://huggingface.co/keras-dreambooth/dreambooth_diffusion_toy)
## Intended uses & limitations
Images of a mother rabbit toy
|
rishitunu/EXTRADATA_ecc_crackdetector_dataset_exhaustive | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 508679985.419
num_examples: 23267
- name: test
num_bytes: 344446810.09
num_examples: 5817
download_size: 269873364
dataset_size: 853126795.5090001
---
# Dataset Card for "EXTRADATA_ecc_crackdetector_dataset_exhaustive"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nguyenthanhdo/zac2023-math-en | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
- name: explanation
dtype: string
- name: answer
dtype: string
splits:
- name: public_test
num_bytes: 31204
num_examples: 189
download_size: 18758
dataset_size: 31204
configs:
- config_name: default
data_files:
- split: public_test
path: data/public_test-*
---
|
open-llm-leaderboard/details_Yhyu13__LMCocktail-10.7B-v1 | ---
pretty_name: Evaluation run of yhyu13/LMCocktail-10.7B-v1
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [yhyu13/LMCocktail-10.7B-v1](https://huggingface.co/yhyu13/LMCocktail-10.7B-v1)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_yhyu13__LMCocktail-10.7B-v1\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-12-23T17:18:52.546076](https://huggingface.co/datasets/open-llm-leaderboard/details_yhyu13__LMCocktail-10.7B-v1/blob/main/results_2023-12-23T17-18-52.546076.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6656979362421428,\n\
\ \"acc_stderr\": 0.031660298381466584,\n \"acc_norm\": 0.6665217090107124,\n\
\ \"acc_norm_stderr\": 0.032305792594458954,\n \"mc1\": 0.5642594859241126,\n\
\ \"mc1_stderr\": 0.01735834539886313,\n \"mc2\": 0.7102777882533455,\n\
\ \"mc2_stderr\": 0.015039392112656383\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.681740614334471,\n \"acc_stderr\": 0.013611993916971453,\n\
\ \"acc_norm\": 0.7064846416382252,\n \"acc_norm_stderr\": 0.013307250444941108\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7056363274248157,\n\
\ \"acc_stderr\": 0.004548247487546323,\n \"acc_norm\": 0.8812985461063533,\n\
\ \"acc_norm_stderr\": 0.0032277587155456044\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.43,\n \"acc_stderr\": 0.04975698519562429,\n \
\ \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.04975698519562429\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6074074074074074,\n\
\ \"acc_stderr\": 0.04218506215368879,\n \"acc_norm\": 0.6074074074074074,\n\
\ \"acc_norm_stderr\": 0.04218506215368879\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7368421052631579,\n \"acc_stderr\": 0.03583496176361072,\n\
\ \"acc_norm\": 0.7368421052631579,\n \"acc_norm_stderr\": 0.03583496176361072\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.73,\n\
\ \"acc_stderr\": 0.04461960433384741,\n \"acc_norm\": 0.73,\n \
\ \"acc_norm_stderr\": 0.04461960433384741\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6754716981132075,\n \"acc_stderr\": 0.02881561571343211,\n\
\ \"acc_norm\": 0.6754716981132075,\n \"acc_norm_stderr\": 0.02881561571343211\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7638888888888888,\n\
\ \"acc_stderr\": 0.03551446610810826,\n \"acc_norm\": 0.7638888888888888,\n\
\ \"acc_norm_stderr\": 0.03551446610810826\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.44,\n \"acc_stderr\": 0.04988876515698589,\n \
\ \"acc_norm\": 0.44,\n \"acc_norm_stderr\": 0.04988876515698589\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.52,\n \"acc_stderr\": 0.05021167315686779,\n \"acc_norm\": 0.52,\n\
\ \"acc_norm_stderr\": 0.05021167315686779\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6705202312138728,\n\
\ \"acc_stderr\": 0.03583901754736413,\n \"acc_norm\": 0.6705202312138728,\n\
\ \"acc_norm_stderr\": 0.03583901754736413\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.37254901960784315,\n \"acc_stderr\": 0.04810840148082636,\n\
\ \"acc_norm\": 0.37254901960784315,\n \"acc_norm_stderr\": 0.04810840148082636\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.77,\n \"acc_stderr\": 0.04229525846816507,\n \"acc_norm\": 0.77,\n\
\ \"acc_norm_stderr\": 0.04229525846816507\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.6297872340425532,\n \"acc_stderr\": 0.03156564682236785,\n\
\ \"acc_norm\": 0.6297872340425532,\n \"acc_norm_stderr\": 0.03156564682236785\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5,\n\
\ \"acc_stderr\": 0.047036043419179864,\n \"acc_norm\": 0.5,\n \
\ \"acc_norm_stderr\": 0.047036043419179864\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.6344827586206897,\n \"acc_stderr\": 0.040131241954243856,\n\
\ \"acc_norm\": 0.6344827586206897,\n \"acc_norm_stderr\": 0.040131241954243856\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4708994708994709,\n \"acc_stderr\": 0.02570765861415496,\n \"\
acc_norm\": 0.4708994708994709,\n \"acc_norm_stderr\": 0.02570765861415496\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4523809523809524,\n\
\ \"acc_stderr\": 0.044518079590553275,\n \"acc_norm\": 0.4523809523809524,\n\
\ \"acc_norm_stderr\": 0.044518079590553275\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.36,\n \"acc_stderr\": 0.048241815132442176,\n \
\ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.048241815132442176\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.8096774193548387,\n \"acc_stderr\": 0.022331707611823078,\n \"\
acc_norm\": 0.8096774193548387,\n \"acc_norm_stderr\": 0.022331707611823078\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.5172413793103449,\n \"acc_stderr\": 0.03515895551165698,\n \"\
acc_norm\": 0.5172413793103449,\n \"acc_norm_stderr\": 0.03515895551165698\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\"\
: 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.806060606060606,\n \"acc_stderr\": 0.03087414513656209,\n\
\ \"acc_norm\": 0.806060606060606,\n \"acc_norm_stderr\": 0.03087414513656209\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8737373737373737,\n \"acc_stderr\": 0.02366435940288023,\n \"\
acc_norm\": 0.8737373737373737,\n \"acc_norm_stderr\": 0.02366435940288023\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9067357512953368,\n \"acc_stderr\": 0.02098685459328973,\n\
\ \"acc_norm\": 0.9067357512953368,\n \"acc_norm_stderr\": 0.02098685459328973\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.658974358974359,\n \"acc_stderr\": 0.02403548967633506,\n \
\ \"acc_norm\": 0.658974358974359,\n \"acc_norm_stderr\": 0.02403548967633506\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.37407407407407406,\n \"acc_stderr\": 0.029502861128955286,\n \
\ \"acc_norm\": 0.37407407407407406,\n \"acc_norm_stderr\": 0.029502861128955286\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.7142857142857143,\n \"acc_stderr\": 0.029344572500634332,\n\
\ \"acc_norm\": 0.7142857142857143,\n \"acc_norm_stderr\": 0.029344572500634332\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3708609271523179,\n \"acc_stderr\": 0.03943966699183629,\n \"\
acc_norm\": 0.3708609271523179,\n \"acc_norm_stderr\": 0.03943966699183629\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8458715596330275,\n \"acc_stderr\": 0.015480826865374308,\n \"\
acc_norm\": 0.8458715596330275,\n \"acc_norm_stderr\": 0.015480826865374308\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5601851851851852,\n \"acc_stderr\": 0.0338517797604481,\n \"acc_norm\"\
: 0.5601851851851852,\n \"acc_norm_stderr\": 0.0338517797604481\n },\n\
\ \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8480392156862745,\n\
\ \"acc_stderr\": 0.025195658428931796,\n \"acc_norm\": 0.8480392156862745,\n\
\ \"acc_norm_stderr\": 0.025195658428931796\n },\n \"harness|hendrycksTest-high_school_world_history|5\"\
: {\n \"acc\": 0.8565400843881856,\n \"acc_stderr\": 0.022818291821017012,\n\
\ \"acc_norm\": 0.8565400843881856,\n \"acc_norm_stderr\": 0.022818291821017012\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6816143497757847,\n\
\ \"acc_stderr\": 0.03126580522513713,\n \"acc_norm\": 0.6816143497757847,\n\
\ \"acc_norm_stderr\": 0.03126580522513713\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7480916030534351,\n \"acc_stderr\": 0.03807387116306086,\n\
\ \"acc_norm\": 0.7480916030534351,\n \"acc_norm_stderr\": 0.03807387116306086\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7933884297520661,\n \"acc_stderr\": 0.03695980128098824,\n \"\
acc_norm\": 0.7933884297520661,\n \"acc_norm_stderr\": 0.03695980128098824\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7870370370370371,\n\
\ \"acc_stderr\": 0.0395783547198098,\n \"acc_norm\": 0.7870370370370371,\n\
\ \"acc_norm_stderr\": 0.0395783547198098\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.754601226993865,\n \"acc_stderr\": 0.03380939813943354,\n\
\ \"acc_norm\": 0.754601226993865,\n \"acc_norm_stderr\": 0.03380939813943354\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.45535714285714285,\n\
\ \"acc_stderr\": 0.04726835553719099,\n \"acc_norm\": 0.45535714285714285,\n\
\ \"acc_norm_stderr\": 0.04726835553719099\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8349514563106796,\n \"acc_stderr\": 0.03675668832233188,\n\
\ \"acc_norm\": 0.8349514563106796,\n \"acc_norm_stderr\": 0.03675668832233188\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8547008547008547,\n\
\ \"acc_stderr\": 0.0230866350868414,\n \"acc_norm\": 0.8547008547008547,\n\
\ \"acc_norm_stderr\": 0.0230866350868414\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \
\ \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8033205619412516,\n\
\ \"acc_stderr\": 0.014214138556913917,\n \"acc_norm\": 0.8033205619412516,\n\
\ \"acc_norm_stderr\": 0.014214138556913917\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7630057803468208,\n \"acc_stderr\": 0.02289408248992599,\n\
\ \"acc_norm\": 0.7630057803468208,\n \"acc_norm_stderr\": 0.02289408248992599\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3787709497206704,\n\
\ \"acc_stderr\": 0.01622353351036512,\n \"acc_norm\": 0.3787709497206704,\n\
\ \"acc_norm_stderr\": 0.01622353351036512\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.761437908496732,\n \"acc_stderr\": 0.024404394928087866,\n\
\ \"acc_norm\": 0.761437908496732,\n \"acc_norm_stderr\": 0.024404394928087866\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.729903536977492,\n\
\ \"acc_stderr\": 0.02521804037341062,\n \"acc_norm\": 0.729903536977492,\n\
\ \"acc_norm_stderr\": 0.02521804037341062\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7932098765432098,\n \"acc_stderr\": 0.02253500670594284,\n\
\ \"acc_norm\": 0.7932098765432098,\n \"acc_norm_stderr\": 0.02253500670594284\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.5035460992907801,\n \"acc_stderr\": 0.02982674915328092,\n \
\ \"acc_norm\": 0.5035460992907801,\n \"acc_norm_stderr\": 0.02982674915328092\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4954367666232073,\n\
\ \"acc_stderr\": 0.012769704263117519,\n \"acc_norm\": 0.4954367666232073,\n\
\ \"acc_norm_stderr\": 0.012769704263117519\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.7316176470588235,\n \"acc_stderr\": 0.02691748122437721,\n\
\ \"acc_norm\": 0.7316176470588235,\n \"acc_norm_stderr\": 0.02691748122437721\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6911764705882353,\n \"acc_stderr\": 0.018690850273595294,\n \
\ \"acc_norm\": 0.6911764705882353,\n \"acc_norm_stderr\": 0.018690850273595294\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6909090909090909,\n\
\ \"acc_stderr\": 0.044262946482000985,\n \"acc_norm\": 0.6909090909090909,\n\
\ \"acc_norm_stderr\": 0.044262946482000985\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7387755102040816,\n \"acc_stderr\": 0.028123429335142783,\n\
\ \"acc_norm\": 0.7387755102040816,\n \"acc_norm_stderr\": 0.028123429335142783\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.835820895522388,\n\
\ \"acc_stderr\": 0.026193923544454125,\n \"acc_norm\": 0.835820895522388,\n\
\ \"acc_norm_stderr\": 0.026193923544454125\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.9,\n \"acc_stderr\": 0.030151134457776334,\n \
\ \"acc_norm\": 0.9,\n \"acc_norm_stderr\": 0.030151134457776334\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5843373493975904,\n\
\ \"acc_stderr\": 0.03836722176598052,\n \"acc_norm\": 0.5843373493975904,\n\
\ \"acc_norm_stderr\": 0.03836722176598052\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.7894736842105263,\n \"acc_stderr\": 0.03126781714663179,\n\
\ \"acc_norm\": 0.7894736842105263,\n \"acc_norm_stderr\": 0.03126781714663179\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5642594859241126,\n\
\ \"mc1_stderr\": 0.01735834539886313,\n \"mc2\": 0.7102777882533455,\n\
\ \"mc2_stderr\": 0.015039392112656383\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8334648776637726,\n \"acc_stderr\": 0.010470796496781091\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6497346474601972,\n \
\ \"acc_stderr\": 0.013140409455571284\n }\n}\n```"
repo_url: https://huggingface.co/yhyu13/LMCocktail-10.7B-v1
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|arc:challenge|25_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|arc:challenge|25_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|gsm8k|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|gsm8k|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hellaswag|10_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hellaswag|10_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-23T17-05-29.674400.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-23T17-18-52.546076.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-23T17-18-52.546076.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- '**/details_harness|winogrande|5_2023-12-23T17-05-29.674400.parquet'
- split: 2023_12_23T17_18_52.546076
path:
- '**/details_harness|winogrande|5_2023-12-23T17-18-52.546076.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-12-23T17-18-52.546076.parquet'
- config_name: results
data_files:
- split: 2023_12_23T17_05_29.674400
path:
- results_2023-12-23T17-05-29.674400.parquet
- split: 2023_12_23T17_18_52.546076
path:
- results_2023-12-23T17-18-52.546076.parquet
- split: latest
path:
- results_2023-12-23T17-18-52.546076.parquet
---
# Dataset Card for Evaluation run of yhyu13/LMCocktail-10.7B-v1
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [yhyu13/LMCocktail-10.7B-v1](https://huggingface.co/yhyu13/LMCocktail-10.7B-v1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_yhyu13__LMCocktail-10.7B-v1",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-23T17:18:52.546076](https://huggingface.co/datasets/open-llm-leaderboard/details_yhyu13__LMCocktail-10.7B-v1/blob/main/results_2023-12-23T17-18-52.546076.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split of each eval):
```python
{
"all": {
"acc": 0.6656979362421428,
"acc_stderr": 0.031660298381466584,
"acc_norm": 0.6665217090107124,
"acc_norm_stderr": 0.032305792594458954,
"mc1": 0.5642594859241126,
"mc1_stderr": 0.01735834539886313,
"mc2": 0.7102777882533455,
"mc2_stderr": 0.015039392112656383
},
"harness|arc:challenge|25": {
"acc": 0.681740614334471,
"acc_stderr": 0.013611993916971453,
"acc_norm": 0.7064846416382252,
"acc_norm_stderr": 0.013307250444941108
},
"harness|hellaswag|10": {
"acc": 0.7056363274248157,
"acc_stderr": 0.004548247487546323,
"acc_norm": 0.8812985461063533,
"acc_norm_stderr": 0.0032277587155456044
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.43,
"acc_stderr": 0.04975698519562429,
"acc_norm": 0.43,
"acc_norm_stderr": 0.04975698519562429
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6074074074074074,
"acc_stderr": 0.04218506215368879,
"acc_norm": 0.6074074074074074,
"acc_norm_stderr": 0.04218506215368879
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7368421052631579,
"acc_stderr": 0.03583496176361072,
"acc_norm": 0.7368421052631579,
"acc_norm_stderr": 0.03583496176361072
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.73,
"acc_stderr": 0.04461960433384741,
"acc_norm": 0.73,
"acc_norm_stderr": 0.04461960433384741
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6754716981132075,
"acc_stderr": 0.02881561571343211,
"acc_norm": 0.6754716981132075,
"acc_norm_stderr": 0.02881561571343211
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7638888888888888,
"acc_stderr": 0.03551446610810826,
"acc_norm": 0.7638888888888888,
"acc_norm_stderr": 0.03551446610810826
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.44,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.44,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.52,
"acc_stderr": 0.05021167315686779,
"acc_norm": 0.52,
"acc_norm_stderr": 0.05021167315686779
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6705202312138728,
"acc_stderr": 0.03583901754736413,
"acc_norm": 0.6705202312138728,
"acc_norm_stderr": 0.03583901754736413
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.37254901960784315,
"acc_stderr": 0.04810840148082636,
"acc_norm": 0.37254901960784315,
"acc_norm_stderr": 0.04810840148082636
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816507,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816507
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6297872340425532,
"acc_stderr": 0.03156564682236785,
"acc_norm": 0.6297872340425532,
"acc_norm_stderr": 0.03156564682236785
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5,
"acc_stderr": 0.047036043419179864,
"acc_norm": 0.5,
"acc_norm_stderr": 0.047036043419179864
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6344827586206897,
"acc_stderr": 0.040131241954243856,
"acc_norm": 0.6344827586206897,
"acc_norm_stderr": 0.040131241954243856
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4708994708994709,
"acc_stderr": 0.02570765861415496,
"acc_norm": 0.4708994708994709,
"acc_norm_stderr": 0.02570765861415496
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4523809523809524,
"acc_stderr": 0.044518079590553275,
"acc_norm": 0.4523809523809524,
"acc_norm_stderr": 0.044518079590553275
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.36,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.36,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8096774193548387,
"acc_stderr": 0.022331707611823078,
"acc_norm": 0.8096774193548387,
"acc_norm_stderr": 0.022331707611823078
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5172413793103449,
"acc_stderr": 0.03515895551165698,
"acc_norm": 0.5172413793103449,
"acc_norm_stderr": 0.03515895551165698
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.806060606060606,
"acc_stderr": 0.03087414513656209,
"acc_norm": 0.806060606060606,
"acc_norm_stderr": 0.03087414513656209
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8737373737373737,
"acc_stderr": 0.02366435940288023,
"acc_norm": 0.8737373737373737,
"acc_norm_stderr": 0.02366435940288023
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9067357512953368,
"acc_stderr": 0.02098685459328973,
"acc_norm": 0.9067357512953368,
"acc_norm_stderr": 0.02098685459328973
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.658974358974359,
"acc_stderr": 0.02403548967633506,
"acc_norm": 0.658974358974359,
"acc_norm_stderr": 0.02403548967633506
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.37407407407407406,
"acc_stderr": 0.029502861128955286,
"acc_norm": 0.37407407407407406,
"acc_norm_stderr": 0.029502861128955286
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7142857142857143,
"acc_stderr": 0.029344572500634332,
"acc_norm": 0.7142857142857143,
"acc_norm_stderr": 0.029344572500634332
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3708609271523179,
"acc_stderr": 0.03943966699183629,
"acc_norm": 0.3708609271523179,
"acc_norm_stderr": 0.03943966699183629
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8458715596330275,
"acc_stderr": 0.015480826865374308,
"acc_norm": 0.8458715596330275,
"acc_norm_stderr": 0.015480826865374308
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5601851851851852,
"acc_stderr": 0.0338517797604481,
"acc_norm": 0.5601851851851852,
"acc_norm_stderr": 0.0338517797604481
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8480392156862745,
"acc_stderr": 0.025195658428931796,
"acc_norm": 0.8480392156862745,
"acc_norm_stderr": 0.025195658428931796
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8565400843881856,
"acc_stderr": 0.022818291821017012,
"acc_norm": 0.8565400843881856,
"acc_norm_stderr": 0.022818291821017012
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6816143497757847,
"acc_stderr": 0.03126580522513713,
"acc_norm": 0.6816143497757847,
"acc_norm_stderr": 0.03126580522513713
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7480916030534351,
"acc_stderr": 0.03807387116306086,
"acc_norm": 0.7480916030534351,
"acc_norm_stderr": 0.03807387116306086
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7933884297520661,
"acc_stderr": 0.03695980128098824,
"acc_norm": 0.7933884297520661,
"acc_norm_stderr": 0.03695980128098824
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7870370370370371,
"acc_stderr": 0.0395783547198098,
"acc_norm": 0.7870370370370371,
"acc_norm_stderr": 0.0395783547198098
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.754601226993865,
"acc_stderr": 0.03380939813943354,
"acc_norm": 0.754601226993865,
"acc_norm_stderr": 0.03380939813943354
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.45535714285714285,
"acc_stderr": 0.04726835553719099,
"acc_norm": 0.45535714285714285,
"acc_norm_stderr": 0.04726835553719099
},
"harness|hendrycksTest-management|5": {
"acc": 0.8349514563106796,
"acc_stderr": 0.03675668832233188,
"acc_norm": 0.8349514563106796,
"acc_norm_stderr": 0.03675668832233188
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8547008547008547,
"acc_stderr": 0.0230866350868414,
"acc_norm": 0.8547008547008547,
"acc_norm_stderr": 0.0230866350868414
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8033205619412516,
"acc_stderr": 0.014214138556913917,
"acc_norm": 0.8033205619412516,
"acc_norm_stderr": 0.014214138556913917
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7630057803468208,
"acc_stderr": 0.02289408248992599,
"acc_norm": 0.7630057803468208,
"acc_norm_stderr": 0.02289408248992599
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3787709497206704,
"acc_stderr": 0.01622353351036512,
"acc_norm": 0.3787709497206704,
"acc_norm_stderr": 0.01622353351036512
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.761437908496732,
"acc_stderr": 0.024404394928087866,
"acc_norm": 0.761437908496732,
"acc_norm_stderr": 0.024404394928087866
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.729903536977492,
"acc_stderr": 0.02521804037341062,
"acc_norm": 0.729903536977492,
"acc_norm_stderr": 0.02521804037341062
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7932098765432098,
"acc_stderr": 0.02253500670594284,
"acc_norm": 0.7932098765432098,
"acc_norm_stderr": 0.02253500670594284
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5035460992907801,
"acc_stderr": 0.02982674915328092,
"acc_norm": 0.5035460992907801,
"acc_norm_stderr": 0.02982674915328092
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4954367666232073,
"acc_stderr": 0.012769704263117519,
"acc_norm": 0.4954367666232073,
"acc_norm_stderr": 0.012769704263117519
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7316176470588235,
"acc_stderr": 0.02691748122437721,
"acc_norm": 0.7316176470588235,
"acc_norm_stderr": 0.02691748122437721
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6911764705882353,
"acc_stderr": 0.018690850273595294,
"acc_norm": 0.6911764705882353,
"acc_norm_stderr": 0.018690850273595294
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6909090909090909,
"acc_stderr": 0.044262946482000985,
"acc_norm": 0.6909090909090909,
"acc_norm_stderr": 0.044262946482000985
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7387755102040816,
"acc_stderr": 0.028123429335142783,
"acc_norm": 0.7387755102040816,
"acc_norm_stderr": 0.028123429335142783
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.835820895522388,
"acc_stderr": 0.026193923544454125,
"acc_norm": 0.835820895522388,
"acc_norm_stderr": 0.026193923544454125
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.9,
"acc_stderr": 0.030151134457776334,
"acc_norm": 0.9,
"acc_norm_stderr": 0.030151134457776334
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5843373493975904,
"acc_stderr": 0.03836722176598052,
"acc_norm": 0.5843373493975904,
"acc_norm_stderr": 0.03836722176598052
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7894736842105263,
"acc_stderr": 0.03126781714663179,
"acc_norm": 0.7894736842105263,
"acc_norm_stderr": 0.03126781714663179
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5642594859241126,
"mc1_stderr": 0.01735834539886313,
"mc2": 0.7102777882533455,
"mc2_stderr": 0.015039392112656383
},
"harness|winogrande|5": {
"acc": 0.8334648776637726,
"acc_stderr": 0.010470796496781091
},
"harness|gsm8k|5": {
"acc": 0.6497346474601972,
"acc_stderr": 0.013140409455571284
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Gohype/Lairmenor | ---
license: openrail
---
|
Ironov/typography | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1816640.0
num_examples: 80
download_size: 1697078
dataset_size: 1816640.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
BangumiBase/rwbyhyousetsuteikoku | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Rwby - Hyousetsu Teikoku
This is the image base of bangumi RWBY - Hyousetsu Teikoku. We detected 29 characters and 2529 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 229 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 49 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 34 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 13 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 38 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 76 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 18 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 14 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 10 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 550 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 19 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 9 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 322 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 25 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 55 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 33 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 177 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 27 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 376 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 16 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 19 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 114 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 10 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 6 | [Download](23/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 24 | 72 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 23 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 7 | [Download](26/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 27 | 14 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 174 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
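If you opt for the manual cleaning recommended above, here is a minimal sketch of unpacking one per-character archive for review. The path `0/dataset.zip` mirrors the download links in the table; the output directory name and the image-extension filter are assumptions.

```python
# Minimal sketch: unpack a per-character dataset.zip from this image base and
# list the images inside for manual review before training.
import zipfile
from pathlib import Path

# Assumed set of image extensions used in the archives.
IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp"}

def extract_character(archive: str, out_dir: str) -> list:
    """Extract a per-character zip and return the image paths it contained."""
    dest = Path(out_dir)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest)
    return sorted(p for p in dest.rglob("*") if p.suffix.lower() in IMAGE_EXTS)

# Usage (after downloading a character zip from the table above):
#   for img in extract_character("0/dataset.zip", "character_0"):
#       print(img)
```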
|
EvanLong/languages-translate-chinese | ---
license: openrail
---
|
mwong/climate-claim-related | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gpl-3.0
multilinguality:
- monolingual
paperswithcode_id: climate-fever
pretty_name: climate-fever
size_categories:
- 100K<n<1M
source_datasets:
- extended|climate_fever
task_categories:
- text-classification
task_ids:
- fact-checking
---
### Dataset Summary
This dataset is extracted from the Climate Fever dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever.html), pre-processed and ready for training and evaluation.
The training objective is a text classification task: given a claim and evidence, predict whether the claim is related to the evidence. |
SilvioLima/raw_data | ---
dataset_info:
features:
- name: source
dtype: string
- name: domain
dtype: string
- name: sentence
dtype: string
- name: triples
dtype: string
splits:
- name: train
num_bytes: 4401368
num_examples: 13513
download_size: 2334364
dataset_size: 4401368
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: apache-2.0
task_categories:
- feature-extraction
- text-generation
tags:
- code
size_categories:
- 10K<n<100K
---

|
shidowake/Doctor-Shotgun_capybara-sharegpt_subset_split_6 | ---
dataset_info:
features:
- name: source
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 9059570.785955267
num_examples: 2000
download_size: 4801926
dataset_size: 9059570.785955267
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
infCapital/financial_phrasebank_en | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2048295
num_examples: 14780
download_size: 1185669
dataset_size: 2048295
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "financial_phrasebank_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AngelBottomless/Danbooru2023-Webp | ---
license: mit
---
|
amuvarma/crema-clean | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: label
dtype:
class_label:
names:
'0': anger
'1': disgust
'2': fear
'3': happy
'4': neutral
'5': sad
- name: original_audio
struct:
- name: array
sequence: float64
- name: path
dtype: string
- name: sampling_rate
dtype: int64
splits:
- name: train
num_bytes: 2120008158.875
num_examples: 5209
- name: validation
num_bytes: 453020846.5
num_examples: 1116
- name: test
num_bytes: 455510657.375
num_examples: 1117
download_size: 1055101242
dataset_size: 3028539662.75
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
idleheroevich2/Burgdorf | ---
license: unknown
---
|
pudim0/minhavoz | ---
license: openrail
---
|
argilla/notus-uf-dpo-closest-rejected | ---
dataset_info:
features:
- name: source
dtype: string
- name: instruction
dtype: string
- name: chosen_model
dtype: string
- name: chosen_rating
dtype: float64
- name: chosen_response
dtype: string
- name: rejected_responses
sequence: string
- name: rejected_ratings
sequence: float64
- name: rejected_response
dtype: string
- name: chosen_avg_rating
dtype: float64
- name: rejected_avg_rating
dtype: float64
splits:
- name: train
num_bytes: 396662574
num_examples: 63620
download_size: 207449481
dataset_size: 396662574
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
PardeepRassani/fashion_image_caption-100-v2 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 22820471.0
num_examples: 100
download_size: 22820373
dataset_size: 22820471.0
---
# Dataset Card for "fashion_image_caption-100-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_ValiantLabs__Fireplace-13b | ---
pretty_name: Evaluation run of ValiantLabs/Fireplace-13b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [ValiantLabs/Fireplace-13b](https://huggingface.co/ValiantLabs/Fireplace-13b)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_ValiantLabs__Fireplace-13b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-18T22:29:29.742832](https://huggingface.co/datasets/open-llm-leaderboard/details_ValiantLabs__Fireplace-13b/blob/main/results_2024-01-18T22-29-29.742832.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each one in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.4379234507249955,\n\
\ \"acc_stderr\": 0.034366003843291505,\n \"acc_norm\": 0.44072254848921605,\n\
\ \"acc_norm_stderr\": 0.03509834671600673,\n \"mc1\": 0.3182374541003672,\n\
\ \"mc1_stderr\": 0.01630598864892061,\n \"mc2\": 0.48240026363735705,\n\
\ \"mc2_stderr\": 0.015361662513373581\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.4325938566552901,\n \"acc_stderr\": 0.014478005694182528,\n\
\ \"acc_norm\": 0.47696245733788395,\n \"acc_norm_stderr\": 0.01459587320535827\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5217088229436367,\n\
\ \"acc_stderr\": 0.004985076094464753,\n \"acc_norm\": 0.6960764787890859,\n\
\ \"acc_norm_stderr\": 0.0045901000501988335\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542129,\n \
\ \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542129\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.35555555555555557,\n\
\ \"acc_stderr\": 0.04135176749720386,\n \"acc_norm\": 0.35555555555555557,\n\
\ \"acc_norm_stderr\": 0.04135176749720386\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.4342105263157895,\n \"acc_stderr\": 0.04033565667848319,\n\
\ \"acc_norm\": 0.4342105263157895,\n \"acc_norm_stderr\": 0.04033565667848319\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.45,\n\
\ \"acc_stderr\": 0.049999999999999996,\n \"acc_norm\": 0.45,\n \
\ \"acc_norm_stderr\": 0.049999999999999996\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.4188679245283019,\n \"acc_stderr\": 0.03036505082911521,\n\
\ \"acc_norm\": 0.4188679245283019,\n \"acc_norm_stderr\": 0.03036505082911521\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.4027777777777778,\n\
\ \"acc_stderr\": 0.04101405519842426,\n \"acc_norm\": 0.4027777777777778,\n\
\ \"acc_norm_stderr\": 0.04101405519842426\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.45,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.45,\n \
\ \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.36416184971098264,\n\
\ \"acc_stderr\": 0.03669072477416908,\n \"acc_norm\": 0.36416184971098264,\n\
\ \"acc_norm_stderr\": 0.03669072477416908\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.23529411764705882,\n \"acc_stderr\": 0.042207736591714534,\n\
\ \"acc_norm\": 0.23529411764705882,\n \"acc_norm_stderr\": 0.042207736591714534\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.63,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.63,\n\
\ \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.425531914893617,\n \"acc_stderr\": 0.03232146916224469,\n\
\ \"acc_norm\": 0.425531914893617,\n \"acc_norm_stderr\": 0.03232146916224469\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.30701754385964913,\n\
\ \"acc_stderr\": 0.0433913832257986,\n \"acc_norm\": 0.30701754385964913,\n\
\ \"acc_norm_stderr\": 0.0433913832257986\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.42758620689655175,\n \"acc_stderr\": 0.041227371113703316,\n\
\ \"acc_norm\": 0.42758620689655175,\n \"acc_norm_stderr\": 0.041227371113703316\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.2804232804232804,\n \"acc_stderr\": 0.02313528797432564,\n \"\
acc_norm\": 0.2804232804232804,\n \"acc_norm_stderr\": 0.02313528797432564\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.29365079365079366,\n\
\ \"acc_stderr\": 0.04073524322147125,\n \"acc_norm\": 0.29365079365079366,\n\
\ \"acc_norm_stderr\": 0.04073524322147125\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.24,\n \"acc_stderr\": 0.04292346959909284,\n \
\ \"acc_norm\": 0.24,\n \"acc_norm_stderr\": 0.04292346959909284\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.45161290322580644,\n\
\ \"acc_stderr\": 0.028310500348568385,\n \"acc_norm\": 0.45161290322580644,\n\
\ \"acc_norm_stderr\": 0.028310500348568385\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.2857142857142857,\n \"acc_stderr\": 0.031785297106427524,\n\
\ \"acc_norm\": 0.2857142857142857,\n \"acc_norm_stderr\": 0.031785297106427524\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620333,\n \"acc_norm\"\
: 0.54,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.5393939393939394,\n \"acc_stderr\": 0.03892207016552012,\n\
\ \"acc_norm\": 0.5393939393939394,\n \"acc_norm_stderr\": 0.03892207016552012\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.5707070707070707,\n \"acc_stderr\": 0.03526552724601199,\n \"\
acc_norm\": 0.5707070707070707,\n \"acc_norm_stderr\": 0.03526552724601199\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.49740932642487046,\n \"acc_stderr\": 0.03608390745384486,\n\
\ \"acc_norm\": 0.49740932642487046,\n \"acc_norm_stderr\": 0.03608390745384486\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.358974358974359,\n \"acc_stderr\": 0.024321738484602364,\n \
\ \"acc_norm\": 0.358974358974359,\n \"acc_norm_stderr\": 0.024321738484602364\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.25925925925925924,\n \"acc_stderr\": 0.026719240783712177,\n \
\ \"acc_norm\": 0.25925925925925924,\n \"acc_norm_stderr\": 0.026719240783712177\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.37815126050420167,\n \"acc_stderr\": 0.031499305777849054,\n\
\ \"acc_norm\": 0.37815126050420167,\n \"acc_norm_stderr\": 0.031499305777849054\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.25165562913907286,\n \"acc_stderr\": 0.035433042343899844,\n \"\
acc_norm\": 0.25165562913907286,\n \"acc_norm_stderr\": 0.035433042343899844\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.5394495412844037,\n \"acc_stderr\": 0.02137049460999509,\n \"\
acc_norm\": 0.5394495412844037,\n \"acc_norm_stderr\": 0.02137049460999509\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.3194444444444444,\n \"acc_stderr\": 0.0317987634217685,\n \"acc_norm\"\
: 0.3194444444444444,\n \"acc_norm_stderr\": 0.0317987634217685\n },\n\
\ \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.5637254901960784,\n\
\ \"acc_stderr\": 0.03480693138457039,\n \"acc_norm\": 0.5637254901960784,\n\
\ \"acc_norm_stderr\": 0.03480693138457039\n },\n \"harness|hendrycksTest-high_school_world_history|5\"\
: {\n \"acc\": 0.6286919831223629,\n \"acc_stderr\": 0.03145068600744859,\n\
\ \"acc_norm\": 0.6286919831223629,\n \"acc_norm_stderr\": 0.03145068600744859\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.5246636771300448,\n\
\ \"acc_stderr\": 0.03351695167652628,\n \"acc_norm\": 0.5246636771300448,\n\
\ \"acc_norm_stderr\": 0.03351695167652628\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.40458015267175573,\n \"acc_stderr\": 0.043046937953806645,\n\
\ \"acc_norm\": 0.40458015267175573,\n \"acc_norm_stderr\": 0.043046937953806645\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.6446280991735537,\n \"acc_stderr\": 0.04369236326573981,\n \"\
acc_norm\": 0.6446280991735537,\n \"acc_norm_stderr\": 0.04369236326573981\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.5185185185185185,\n\
\ \"acc_stderr\": 0.04830366024635331,\n \"acc_norm\": 0.5185185185185185,\n\
\ \"acc_norm_stderr\": 0.04830366024635331\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.5950920245398773,\n \"acc_stderr\": 0.03856672163548913,\n\
\ \"acc_norm\": 0.5950920245398773,\n \"acc_norm_stderr\": 0.03856672163548913\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.38392857142857145,\n\
\ \"acc_stderr\": 0.04616143075028547,\n \"acc_norm\": 0.38392857142857145,\n\
\ \"acc_norm_stderr\": 0.04616143075028547\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.5145631067961165,\n \"acc_stderr\": 0.04948637324026637,\n\
\ \"acc_norm\": 0.5145631067961165,\n \"acc_norm_stderr\": 0.04948637324026637\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.7350427350427351,\n\
\ \"acc_stderr\": 0.028911208802749448,\n \"acc_norm\": 0.7350427350427351,\n\
\ \"acc_norm_stderr\": 0.028911208802749448\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.4,\n \"acc_stderr\": 0.049236596391733084,\n \
\ \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.049236596391733084\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.561941251596424,\n\
\ \"acc_stderr\": 0.017742232238257244,\n \"acc_norm\": 0.561941251596424,\n\
\ \"acc_norm_stderr\": 0.017742232238257244\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.47398843930635837,\n \"acc_stderr\": 0.02688264343402289,\n\
\ \"acc_norm\": 0.47398843930635837,\n \"acc_norm_stderr\": 0.02688264343402289\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2581005586592179,\n\
\ \"acc_stderr\": 0.014635185616527813,\n \"acc_norm\": 0.2581005586592179,\n\
\ \"acc_norm_stderr\": 0.014635185616527813\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.42810457516339867,\n \"acc_stderr\": 0.028332397483664274,\n\
\ \"acc_norm\": 0.42810457516339867,\n \"acc_norm_stderr\": 0.028332397483664274\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.49517684887459806,\n\
\ \"acc_stderr\": 0.02839677044411129,\n \"acc_norm\": 0.49517684887459806,\n\
\ \"acc_norm_stderr\": 0.02839677044411129\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.4845679012345679,\n \"acc_stderr\": 0.0278074900442762,\n\
\ \"acc_norm\": 0.4845679012345679,\n \"acc_norm_stderr\": 0.0278074900442762\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.35106382978723405,\n \"acc_stderr\": 0.028473501272963768,\n \
\ \"acc_norm\": 0.35106382978723405,\n \"acc_norm_stderr\": 0.028473501272963768\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.32790091264667537,\n\
\ \"acc_stderr\": 0.011989936640666535,\n \"acc_norm\": 0.32790091264667537,\n\
\ \"acc_norm_stderr\": 0.011989936640666535\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.29044117647058826,\n \"acc_stderr\": 0.02757646862274053,\n\
\ \"acc_norm\": 0.29044117647058826,\n \"acc_norm_stderr\": 0.02757646862274053\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.3872549019607843,\n \"acc_stderr\": 0.01970687580408562,\n \
\ \"acc_norm\": 0.3872549019607843,\n \"acc_norm_stderr\": 0.01970687580408562\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.5272727272727272,\n\
\ \"acc_stderr\": 0.04782001791380061,\n \"acc_norm\": 0.5272727272727272,\n\
\ \"acc_norm_stderr\": 0.04782001791380061\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.49795918367346936,\n \"acc_stderr\": 0.0320089533497105,\n\
\ \"acc_norm\": 0.49795918367346936,\n \"acc_norm_stderr\": 0.0320089533497105\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.5174129353233831,\n\
\ \"acc_stderr\": 0.035333892347392454,\n \"acc_norm\": 0.5174129353233831,\n\
\ \"acc_norm_stderr\": 0.035333892347392454\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695238,\n \
\ \"acc_norm\": 0.66,\n \"acc_norm_stderr\": 0.04760952285695238\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4457831325301205,\n\
\ \"acc_stderr\": 0.03869543323472101,\n \"acc_norm\": 0.4457831325301205,\n\
\ \"acc_norm_stderr\": 0.03869543323472101\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.6023391812865497,\n \"acc_stderr\": 0.0375363895576169,\n\
\ \"acc_norm\": 0.6023391812865497,\n \"acc_norm_stderr\": 0.0375363895576169\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3182374541003672,\n\
\ \"mc1_stderr\": 0.01630598864892061,\n \"mc2\": 0.48240026363735705,\n\
\ \"mc2_stderr\": 0.015361662513373581\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6716653512233622,\n \"acc_stderr\": 0.01319829944971789\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.2577710386656558,\n \
\ \"acc_stderr\": 0.012048370213576602\n }\n}\n```"
repo_url: https://huggingface.co/ValiantLabs/Fireplace-13b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|arc:challenge|25_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|gsm8k|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hellaswag|10_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-18T22-29-29.742832.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-18T22-29-29.742832.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- '**/details_harness|winogrande|5_2024-01-18T22-29-29.742832.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-18T22-29-29.742832.parquet'
- config_name: results
data_files:
- split: 2024_01_18T22_29_29.742832
path:
- results_2024-01-18T22-29-29.742832.parquet
- split: latest
path:
- results_2024-01-18T22-29-29.742832.parquet
---
# Dataset Card for Evaluation run of ValiantLabs/Fireplace-13b
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [ValiantLabs/Fireplace-13b](https://huggingface.co/ValiantLabs/Fireplace-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_ValiantLabs__Fireplace-13b",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-01-18T22:29:29.742832](https://huggingface.co/datasets/open-llm-leaderboard/details_ValiantLabs__Fireplace-13b/blob/main/results_2024-01-18T22-29-29.742832.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task's results in its "latest" split):
```json
{
"all": {
"acc": 0.4379234507249955,
"acc_stderr": 0.034366003843291505,
"acc_norm": 0.44072254848921605,
"acc_norm_stderr": 0.03509834671600673,
"mc1": 0.3182374541003672,
"mc1_stderr": 0.01630598864892061,
"mc2": 0.48240026363735705,
"mc2_stderr": 0.015361662513373581
},
"harness|arc:challenge|25": {
"acc": 0.4325938566552901,
"acc_stderr": 0.014478005694182528,
"acc_norm": 0.47696245733788395,
"acc_norm_stderr": 0.01459587320535827
},
"harness|hellaswag|10": {
"acc": 0.5217088229436367,
"acc_stderr": 0.004985076094464753,
"acc_norm": 0.6960764787890859,
"acc_norm_stderr": 0.0045901000501988335
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542129,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542129
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.35555555555555557,
"acc_stderr": 0.04135176749720386,
"acc_norm": 0.35555555555555557,
"acc_norm_stderr": 0.04135176749720386
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.4342105263157895,
"acc_stderr": 0.04033565667848319,
"acc_norm": 0.4342105263157895,
"acc_norm_stderr": 0.04033565667848319
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.45,
"acc_stderr": 0.049999999999999996,
"acc_norm": 0.45,
"acc_norm_stderr": 0.049999999999999996
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.4188679245283019,
"acc_stderr": 0.03036505082911521,
"acc_norm": 0.4188679245283019,
"acc_norm_stderr": 0.03036505082911521
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.4027777777777778,
"acc_stderr": 0.04101405519842426,
"acc_norm": 0.4027777777777778,
"acc_norm_stderr": 0.04101405519842426
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.36416184971098264,
"acc_stderr": 0.03669072477416908,
"acc_norm": 0.36416184971098264,
"acc_norm_stderr": 0.03669072477416908
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.23529411764705882,
"acc_stderr": 0.042207736591714534,
"acc_norm": 0.23529411764705882,
"acc_norm_stderr": 0.042207736591714534
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.425531914893617,
"acc_stderr": 0.03232146916224469,
"acc_norm": 0.425531914893617,
"acc_norm_stderr": 0.03232146916224469
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.30701754385964913,
"acc_stderr": 0.0433913832257986,
"acc_norm": 0.30701754385964913,
"acc_norm_stderr": 0.0433913832257986
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.42758620689655175,
"acc_stderr": 0.041227371113703316,
"acc_norm": 0.42758620689655175,
"acc_norm_stderr": 0.041227371113703316
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.2804232804232804,
"acc_stderr": 0.02313528797432564,
"acc_norm": 0.2804232804232804,
"acc_norm_stderr": 0.02313528797432564
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.29365079365079366,
"acc_stderr": 0.04073524322147125,
"acc_norm": 0.29365079365079366,
"acc_norm_stderr": 0.04073524322147125
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.24,
"acc_stderr": 0.04292346959909284,
"acc_norm": 0.24,
"acc_norm_stderr": 0.04292346959909284
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.45161290322580644,
"acc_stderr": 0.028310500348568385,
"acc_norm": 0.45161290322580644,
"acc_norm_stderr": 0.028310500348568385
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.2857142857142857,
"acc_stderr": 0.031785297106427524,
"acc_norm": 0.2857142857142857,
"acc_norm_stderr": 0.031785297106427524
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.5393939393939394,
"acc_stderr": 0.03892207016552012,
"acc_norm": 0.5393939393939394,
"acc_norm_stderr": 0.03892207016552012
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.5707070707070707,
"acc_stderr": 0.03526552724601199,
"acc_norm": 0.5707070707070707,
"acc_norm_stderr": 0.03526552724601199
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.49740932642487046,
"acc_stderr": 0.03608390745384486,
"acc_norm": 0.49740932642487046,
"acc_norm_stderr": 0.03608390745384486
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.358974358974359,
"acc_stderr": 0.024321738484602364,
"acc_norm": 0.358974358974359,
"acc_norm_stderr": 0.024321738484602364
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.25925925925925924,
"acc_stderr": 0.026719240783712177,
"acc_norm": 0.25925925925925924,
"acc_norm_stderr": 0.026719240783712177
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.37815126050420167,
"acc_stderr": 0.031499305777849054,
"acc_norm": 0.37815126050420167,
"acc_norm_stderr": 0.031499305777849054
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.25165562913907286,
"acc_stderr": 0.035433042343899844,
"acc_norm": 0.25165562913907286,
"acc_norm_stderr": 0.035433042343899844
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.5394495412844037,
"acc_stderr": 0.02137049460999509,
"acc_norm": 0.5394495412844037,
"acc_norm_stderr": 0.02137049460999509
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.3194444444444444,
"acc_stderr": 0.0317987634217685,
"acc_norm": 0.3194444444444444,
"acc_norm_stderr": 0.0317987634217685
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.5637254901960784,
"acc_stderr": 0.03480693138457039,
"acc_norm": 0.5637254901960784,
"acc_norm_stderr": 0.03480693138457039
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.6286919831223629,
"acc_stderr": 0.03145068600744859,
"acc_norm": 0.6286919831223629,
"acc_norm_stderr": 0.03145068600744859
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.5246636771300448,
"acc_stderr": 0.03351695167652628,
"acc_norm": 0.5246636771300448,
"acc_norm_stderr": 0.03351695167652628
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.40458015267175573,
"acc_stderr": 0.043046937953806645,
"acc_norm": 0.40458015267175573,
"acc_norm_stderr": 0.043046937953806645
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.6446280991735537,
"acc_stderr": 0.04369236326573981,
"acc_norm": 0.6446280991735537,
"acc_norm_stderr": 0.04369236326573981
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.5185185185185185,
"acc_stderr": 0.04830366024635331,
"acc_norm": 0.5185185185185185,
"acc_norm_stderr": 0.04830366024635331
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.5950920245398773,
"acc_stderr": 0.03856672163548913,
"acc_norm": 0.5950920245398773,
"acc_norm_stderr": 0.03856672163548913
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.38392857142857145,
"acc_stderr": 0.04616143075028547,
"acc_norm": 0.38392857142857145,
"acc_norm_stderr": 0.04616143075028547
},
"harness|hendrycksTest-management|5": {
"acc": 0.5145631067961165,
"acc_stderr": 0.04948637324026637,
"acc_norm": 0.5145631067961165,
"acc_norm_stderr": 0.04948637324026637
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.7350427350427351,
"acc_stderr": 0.028911208802749448,
"acc_norm": 0.7350427350427351,
"acc_norm_stderr": 0.028911208802749448
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.4,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.4,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.561941251596424,
"acc_stderr": 0.017742232238257244,
"acc_norm": 0.561941251596424,
"acc_norm_stderr": 0.017742232238257244
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.47398843930635837,
"acc_stderr": 0.02688264343402289,
"acc_norm": 0.47398843930635837,
"acc_norm_stderr": 0.02688264343402289
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2581005586592179,
"acc_stderr": 0.014635185616527813,
"acc_norm": 0.2581005586592179,
"acc_norm_stderr": 0.014635185616527813
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.42810457516339867,
"acc_stderr": 0.028332397483664274,
"acc_norm": 0.42810457516339867,
"acc_norm_stderr": 0.028332397483664274
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.49517684887459806,
"acc_stderr": 0.02839677044411129,
"acc_norm": 0.49517684887459806,
"acc_norm_stderr": 0.02839677044411129
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.4845679012345679,
"acc_stderr": 0.0278074900442762,
"acc_norm": 0.4845679012345679,
"acc_norm_stderr": 0.0278074900442762
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.35106382978723405,
"acc_stderr": 0.028473501272963768,
"acc_norm": 0.35106382978723405,
"acc_norm_stderr": 0.028473501272963768
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.32790091264667537,
"acc_stderr": 0.011989936640666535,
"acc_norm": 0.32790091264667537,
"acc_norm_stderr": 0.011989936640666535
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.29044117647058826,
"acc_stderr": 0.02757646862274053,
"acc_norm": 0.29044117647058826,
"acc_norm_stderr": 0.02757646862274053
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.3872549019607843,
"acc_stderr": 0.01970687580408562,
"acc_norm": 0.3872549019607843,
"acc_norm_stderr": 0.01970687580408562
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.5272727272727272,
"acc_stderr": 0.04782001791380061,
"acc_norm": 0.5272727272727272,
"acc_norm_stderr": 0.04782001791380061
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.49795918367346936,
"acc_stderr": 0.0320089533497105,
"acc_norm": 0.49795918367346936,
"acc_norm_stderr": 0.0320089533497105
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.5174129353233831,
"acc_stderr": 0.035333892347392454,
"acc_norm": 0.5174129353233831,
"acc_norm_stderr": 0.035333892347392454
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695238,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695238
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4457831325301205,
"acc_stderr": 0.03869543323472101,
"acc_norm": 0.4457831325301205,
"acc_norm_stderr": 0.03869543323472101
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.6023391812865497,
"acc_stderr": 0.0375363895576169,
"acc_norm": 0.6023391812865497,
"acc_norm_stderr": 0.0375363895576169
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3182374541003672,
"mc1_stderr": 0.01630598864892061,
"mc2": 0.48240026363735705,
"mc2_stderr": 0.015361662513373581
},
"harness|winogrande|5": {
"acc": 0.6716653512233622,
"acc_stderr": 0.01319829944971789
},
"harness|gsm8k|5": {
"acc": 0.2577710386656558,
"acc_stderr": 0.012048370213576602
}
}
```
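As a quick illustration, the per-task accuracies in a results dict like the one above can be aggregated in plain Python. This is a minimal sketch using a hypothetical, abridged copy of the dict (only three of the 57 hendrycksTest entries), not the full results file:

```python
# Abridged, illustrative copy of the per-task results shown above.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.28},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.35555555555555557},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.4342105263157895},
}

# Select the MMLU (hendrycksTest) tasks and average their accuracies.
mmlu_tasks = {k: v for k, v in results.items() if "hendrycksTest" in k}
mmlu_avg = sum(v["acc"] for v in mmlu_tasks.values()) / len(mmlu_tasks)
print(round(mmlu_avg, 4))
```

The leaderboard applies the same kind of averaging across all 57 MMLU subtasks when computing the aggregated MMLU score shown in the `results` configuration.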
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
peperohb/modern-art | ---
license: osl-3.0
---
|
autoevaluate/autoeval-eval-adversarial_qa-adversarialQA-776ce2-51767145319 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- adversarial_qa
eval_info:
task: extractive_question_answering
model: zhangfx7/deberta-base-finetuned-squad-pruned0.1
metrics: []
dataset_name: adversarial_qa
dataset_config: adversarialQA
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: zhangfx7/deberta-base-finetuned-squad-pruned0.1
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@tp](https://huggingface.co/tp) for evaluating this model. |
0-hero/distilabel-math-preference-dpo | ---
dataset_info:
features:
- name: metadata
dtype: string
id: metadata
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: chosen_rating
dtype: float64
- name: rejected
dtype: string
- name: rejected_rating
dtype: float64
splits:
- name: train
num_bytes: 7049182
num_examples: 2418
download_size: 2862646
dataset_size: 7049182
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AfnanTS/Arabic-Lama-conceptNet-ORG-new | ---
dataset_info:
features:
- name: arSubject
dtype: string
- name: arPredicate
dtype: string
- name: arSentence
dtype: string
- name: oldArObject
dtype: string
- name: arObject
dtype: string
- name: maskedArSentence
dtype: string
- name: enSentence
dtype: string
- name: enSubject
dtype: string
- name: enPredicate
dtype: string
- name: enObject
dtype: string
splits:
- name: train
num_bytes: 4576285
num_examples: 14811
download_size: 1997110
dataset_size: 4576285
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jpft/danbooru2023 | ---
license: mit
task_categories:
- image-classification
- image-to-image
- text-to-image
language:
- en
- ja
pretty_name: danbooru2023
size_categories:
- 1M<n<10M
viewer: false
---
<img src="https://huggingface.co/datasets/nyanko7/danbooru2023/resolve/main/cover.webp" alt="cover" width="750"/>
# Danbooru2023: A Large-Scale Crowdsourced and Tagged Anime Illustration Dataset
<!-- Provide a quick summary of the dataset. -->
Danbooru2023 is a large-scale anime image dataset with over 5 million images contributed and annotated in detail by an enthusiast community. Image tags cover aspects like characters, scenes, copyrights, artists, etc., with an average of 30 tags per image.
Danbooru is a veteran anime image board with high-quality images and extensive tag metadata. The dataset can be used to train image classification, multi-label tagging, character detection, generative models, and other computer vision tasks.
- **Shared by:** Nyanko Devs
- **Language(s):** English, Japanese
- **License:** MIT
This dataset is built on top of [danbooru2021](https://gwern.net/danbooru2021). It expands the dataset to include images up to ID #6,857,737, adding over 1.8 million additional images; the total size is now approximately 8 terabytes (8,000 GB).
## Use
## Format
The goal of the dataset is to be as easy as possible to use immediately, avoiding obscure file formats, while allowing simultaneous research & seeding of the torrent, with easy updates.
Images are provided in the full original form (be that JPG, PNG, GIF or otherwise) for reference/archival purposes, and bucketed into 1000 subdirectories 0000–0999 (0-padded), which is the Danbooru ID modulo 1000 (ie. all images in 0999/ have an ID ending in ‘999’); IDs can be turned into paths by dividing & padding (eg. in Bash, BUCKET=$(printf "%04d" $(( ID % 1000 )) )) and then the file is at {original,512px}/$BUCKET/$ID.$EXT.
The reason for the bucketing is that a single directory would cause pathological filesystem performance, and modulo ID is a simple hash which spreads images evenly without requiring additional future directories to be made or a filesystem IO to check where the file is. The ID is not zero-padded and files end in the relevant extension, hence the file layout looks like this:
```bash
$ tree / | less
/
├── danbooru2023 -> /mnt/diffusionstorage/workspace/danbooru/
│ ├── metadata
│ ├── readme.md
│ ├── original
│ │ ├── 0000 -> data-0000.tar
│ │ ├── 0001 -> data-0001.tar
│ │ │ ├── 10001.jpg
│ │ │ ├── 210001.png
│ │ │ ├── 3120001.webp
│ │ │ ├── 6513001.jpg
```
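The ID-to-path computation described above can be sketched in Python (a minimal illustration mirroring the Bash snippet; the `danbooru2023/original` root and the file extension are assumptions, since the extension must be known or probed on disk):

```python
from pathlib import Path

def bucket_path(image_id: int, ext: str, root: str = "danbooru2023/original") -> Path:
    """Map a Danbooru ID to its bucketed file path.

    The bucket is the ID modulo 1000, zero-padded to 4 digits,
    mirroring BUCKET=$(printf "%04d" $(( ID % 1000 ))) in Bash.
    """
    bucket = f"{image_id % 1000:04d}"  # e.g. 1525146 -> "0146"
    return Path(root) / bucket / f"{image_id}.{ext}"

# Matches the example paths in this card:
print(bucket_path(1525146, "jpg"))  # danbooru2023/original/0146/1525146.jpg
print(bucket_path(6513001, "jpg"))  # danbooru2023/original/0001/6513001.jpg
```

All images in a given bucket share the same last three ID digits, so the mapping is a cheap, stable hash that needs no directory lookup.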
Currently represented file extensions are: avi/bmp/gif/html/jpeg/jpg/mp3/mp4/mpg/pdf/png/rar/swf/webm/wmv/zip.
Raw original files are treacherous. Be careful if working with the original dataset. There are many odd files: truncated, non-sRGB colorspace, wrong file extensions (eg. some PNGs have .jpg extensions like original/0146/1525146.jpg or original/0558/1422558.jpg), etc. |