datasetId stringlengths 2 117 | card stringlengths 19 1.01M |
|---|---|
sankettgorey/L1_tabular_data | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 146553128.0
num_examples: 560
- name: test
num_bytes: 18313783.5
num_examples: 70
- name: validation
num_bytes: 18343643.5
num_examples: 70
download_size: 152684335
dataset_size: 183210555.0
---
# Dataset Card for "L1_tabular_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Yingda/test | ---
license: apache-2.0
---
|
xibaozichenchog/xi | ---
license: openrail
---
|
singlelinexyz/singlelines_raster | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 340025164.8
num_examples: 1923
download_size: 275177264
dataset_size: 340025164.8
---
# Dataset Card for "singlelines_raster"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Miniex/NateJp.zip | ---
license: openrail
---
|
RikeshSilwal/WhisperPreprocessedTrain | ---
license: apache-2.0
---
|
imranraad/github-emotion-surprise | ---
task_categories:
- text-classification
---
# AutoTrain Dataset for project: github-emotion-love
## Dataset Description
Dataset used in the paper: Imran et al., ["Data Augmentation for Improving Emotion Recognition in Software Engineering Communication"](https://arxiv.org/abs/2208.05573), ASE-2022.
This is an annotated dataset of 2,000 GitHub comments. Six basic emotions are annotated: Anger, Love, Fear, Joy, Sadness, and Surprise. This repository contains annotations for all six emotions.
## Dataset Structure
The dataset is in CSV format. The columns are:
```id, modified_comment, Anger, Love, Fear, Joy, Sadness, Surprise```
Here, `id` is a unique identifier for each comment. Each emotion is marked as 1 (present) or 0 (absent).
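As a quick illustration of the column layout, the CSV can be read with the standard library; the sample rows below are hypothetical, not actual dataset rows:

```python
import csv
import io

# Hypothetical rows following the documented column layout.
sample = """id,modified_comment,Anger,Love,Fear,Joy,Sadness,Surprise
1,"This fix is amazing, thank you!",0,1,0,1,0,0
2,"Why does this keep breaking?",1,0,0,0,1,0
"""
EMOTIONS = ["Anger", "Love", "Fear", "Joy", "Sadness", "Surprise"]

rows = list(csv.DictReader(io.StringIO(sample)))
# Each emotion column is a binary flag; a comment may carry several emotions at once.
labels = {r["id"]: [e for e in EMOTIONS if r[e] == "1"] for r in rows}
print(labels)  # → {'1': ['Love', 'Joy'], '2': ['Anger', 'Sadness']}
```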
### Dataset Splits
The dataset is split into train and test sets, with sizes as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1600 |
| test | 400 |
|
jorge-henao/ask2democracy-cfqa-pension | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: topics
sequence: string
splits:
- name: train
num_bytes: 2464607
num_examples: 1069
download_size: 237794
dataset_size: 2464607
---
# Dataset Card for "ask2democracy-cfqa-pension"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mask-distilled-one-sec-cv12/chunk_215 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1089357020
num_examples: 213935
download_size: 1113114537
dataset_size: 1089357020
---
# Dataset Card for "chunk_215"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Melricflash/CW_MedAbstractsAlt | ---
license: apache-2.0
---
|
kaleemWaheed/twitter_dataset_1713192900 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 35020
num_examples: 87
download_size: 19525
dataset_size: 35020
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Weni/LLM-base-v2 | ---
language:
- pt
size_categories:
- 10K<n<100K
task_categories:
- question-answering
pretty_name: ' LLM-Base-v2'
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: float64
- name: question
dtype: string
- name: resposta
dtype: string
- name: context
dtype: string
- name: correct_ans
dtype: int64
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 29624508
num_examples: 12175
download_size: 9511809
dataset_size: 29624508
---
# Dataset Card for "LLM-base-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Zenseact/ZOD | ---
license: cc-by-sa-4.0
task_categories:
- object-detection
- image-classification
- depth-estimation
- image-segmentation
language:
- en
tags:
- Autonomous Driving
- Autonomous Vehicles
- Images
- Lidar
- GNSS/IMU
- Vehicle Data
- Satellite Positioning
pretty_name: ZOD
size_categories:
- 10K<n<100K
paperswithcode_id: zenseact-open-dataset
---
# Dataset Card for ZOD
The Zenseact Open Dataset (ZOD) is a large multi-modal autonomous driving (AD) dataset created by researchers at Zenseact. It was collected over a 2-year period in 14 different European countries, using a fleet of vehicles equipped with a full sensor suite. The dataset consists of three subsets: Frames, Sequences, and Drives, designed to encompass both data diversity and support for spatiotemporal learning, sensor fusion, localization, and mapping. Together with the data, we have developed an SDK containing tutorials, downloading functionality, and a dataset API for easy access to the data. The development kit is available on GitHub.
## Dataset Details
### Dataset Description
ZOD is a large-scale, diverse, multimodal AD dataset collected over two years in various European countries.
It features long-range, high-resolution sensors and contains data from a wide variety of traffic scenarios.
- **Curated by:** Zenseact AB
- **Funded by:** Zenseact AB
- **Shared by:** Zenseact AB
- **Language(s):** English
- **License:** CC BY-SA
### Dataset Sources [optional]
- **Repository:** https://github.com/zenseact/zod
- **Paper:** https://arxiv.org/abs/2305.02008
- **Website:** https://zod.zenseact.com/
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
The Zenseact Open Dataset (ZOD) is the property of Zenseact AB (© 2022 Zenseact AB) and was collected by several development vehicles with an identical sensor layout.
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Personal and Sensitive Information
To protect the privacy of every individual in our dataset, and to comply with privacy regulations such as the European Union’s General Data Protection Regulation (GDPR), we employ third-party services (Brighter AI) to anonymize all images in our dataset. The anonymization should protect all personally identifiable information in the images, including faces and license plates.
For Frames we supply two types of anonymization, namely Deep Neural Anonymization Technology (DNAT) and blurring. We studied the effects that these two anonymization methods have on downstream computer vision tasks and found no significant difference between the two. For more details about the experiments, see our paper. After this study, we anonymized the Sequences and Drives using the blurring anonymization method only.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
```
@inproceedings{alibeigi2023zenseact,
  title={Zenseact Open Dataset: A large-scale and diverse multimodal dataset for autonomous driving},
  author={Alibeigi, Mina and Ljungbergh, William and Tonderski, Adam and Hess, Georg and Lilja, Adam and Lindstrom, Carl and Motorniuk, Daria and Fu, Junsheng and Widahl, Jenny and Petersson, Christoffer},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2023}
}
```
## Glossary
- **ZOD**: Zenseact Open Dataset
- **AD**: Autonomous Driving
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
opendataset@zenseact.com |
hoangphu7122002ai/phobert_t2sql_embedding_syll | ---
dataset_info:
features:
- name: index
dtype: int64
- name: emb
sequence: float32
splits:
- name: train
num_bytes: 752384976
num_examples: 243964
download_size: 903137453
dataset_size: 752384976
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
joshuapsa/gpt-generated-news-paragraphs-v1.0 | ---
dataset_info:
features:
- name: class_index
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: text
dtype: string
- name: aviation
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: cybersecurity
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: domestic_unrest_violence
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: extreme_weather
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: forced_labor
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: general_biz_trend
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: individual_accidents_tragedies
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: later_report
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: lawsuit_legal_insurance
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: leisure_other_news
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: maritime
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: pandemics_large_scale_diseases
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: railway
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: strike
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: trade_war_embargos_bans
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: transportation_trends_projects
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: war_conflict
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: warehouse_fire
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 303623
num_examples: 540
- name: valid
num_bytes: 101197
num_examples: 180
- name: test
num_bytes: 100901
num_examples: 180
download_size: 177940
dataset_size: 505721
---
# Dataset Card for "gpt-generated-news-paragraphs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
financeart/EmiTalks3 | ---
license: mit
---
|
CyberHarem/katua_fireemblem | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of katua/カチュア/카츄아 (Fire Emblem)
This is the dataset of katua/カチュア/카츄아 (Fire Emblem), containing 270 images and their tags.
The core tags of this character are `blue_hair, short_hair, blue_eyes, headband, breasts, medium_breasts`, which are pruned in this dataset.
Images were crawled from many sites (e.g. Danbooru, Pixiv, Zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 270 | 245.22 MiB | [Download](https://huggingface.co/datasets/CyberHarem/katua_fireemblem/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 270 | 165.30 MiB | [Download](https://huggingface.co/datasets/CyberHarem/katua_fireemblem/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 534 | 304.91 MiB | [Download](https://huggingface.co/datasets/CyberHarem/katua_fireemblem/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 270 | 227.04 MiB | [Download](https://huggingface.co/datasets/CyberHarem/katua_fireemblem/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 534 | 387.82 MiB | [Download](https://huggingface.co/datasets/CyberHarem/katua_fireemblem/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/katua_fireemblem',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be discoverable in these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, elbow_gloves, full_body, solo, thigh_boots, thighhighs, breastplate, fingerless_gloves, looking_at_viewer, short_dress, side_slit, simple_background, spear, white_background, holding_weapon, standing, sword, pegasus_knight_uniform_(fire_emblem), sheath, shoulder_armor, smile, zettai_ryouiki |
| 1 | 22 |  |  |  |  |  | 1girl, solo, elbow_gloves, fingerless_gloves, pegasus_knight_uniform_(fire_emblem), thighhighs, spear, breastplate, smile, boots, simple_background, zettai_ryouiki |
| 2 | 26 |  |  |  |  |  | 1girl, solo, nipples, blush, nude, large_breasts, pussy, open_mouth |
| 3 | 15 |  |  |  |  |  | 1girl, white_dress, bare_shoulders, smile, solo, wedding_dress, simple_background, bangs, detached_collar, strapless_dress, hair_flower, white_background, full_body, feather_trim, official_alternate_costume, skirt_hold, white_footwear, closed_mouth, detached_sleeves, holding, looking_at_viewer |
| 4 | 14 |  |  |  |  |  | fake_animal_ears, rabbit_ears, rabbit_tail, 1girl, pegasus_knight_uniform_(fire_emblem), solo, elbow_gloves, thighhighs, blush, playboy_bunny, hair_flower, looking_at_viewer, simple_background, white_gloves, cleavage, egg, detached_collar, open_mouth, white_background |
| 5 | 7 |  |  |  |  |  | 1boy, hetero, nipples, open_mouth, 1girl, blush, sex, solo_focus, sweat, vaginal, pussy, spread_legs, closed_eyes, completely_nude, female_pubic_hair, girl_on_top, mosaic_censoring, navel, penis, cowgirl_position |
| 6 | 5 |  |  |  |  |  | 1boy, 1girl, hetero, nipples, sex, solo_focus, open_mouth, penis, thighhighs, vaginal, white_headband, blush, censored, cum_in_pussy, fingerless_gloves, spread_legs, sweat, arm_grab, armor, ass, breasts_out, closed_eyes, on_back |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | elbow_gloves | full_body | solo | thigh_boots | thighhighs | breastplate | fingerless_gloves | looking_at_viewer | short_dress | side_slit | simple_background | spear | white_background | holding_weapon | standing | sword | pegasus_knight_uniform_(fire_emblem) | sheath | shoulder_armor | smile | zettai_ryouiki | boots | nipples | blush | nude | large_breasts | pussy | open_mouth | white_dress | bare_shoulders | wedding_dress | bangs | detached_collar | strapless_dress | hair_flower | feather_trim | official_alternate_costume | skirt_hold | white_footwear | closed_mouth | detached_sleeves | holding | fake_animal_ears | rabbit_ears | rabbit_tail | playboy_bunny | white_gloves | cleavage | egg | 1boy | hetero | sex | solo_focus | sweat | vaginal | spread_legs | closed_eyes | completely_nude | female_pubic_hair | girl_on_top | mosaic_censoring | navel | penis | cowgirl_position | white_headband | censored | cum_in_pussy | arm_grab | armor | ass | breasts_out | on_back |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:------------|:-------|:--------------|:-------------|:--------------|:--------------------|:--------------------|:--------------|:------------|:--------------------|:--------|:-------------------|:-----------------|:-----------|:--------|:---------------------------------------|:---------|:-----------------|:--------|:-----------------|:--------|:----------|:--------|:-------|:----------------|:--------|:-------------|:--------------|:-----------------|:----------------|:--------|:------------------|:------------------|:--------------|:---------------|:-----------------------------|:-------------|:-----------------|:---------------|:-------------------|:----------|:-------------------|:--------------|:--------------|:----------------|:---------------|:-----------|:------|:-------|:---------|:------|:-------------|:--------|:----------|:--------------|:--------------|:------------------|:--------------------|:--------------|:-------------------|:--------|:--------|:-------------------|:-----------------|:-----------|:---------------|:-----------|:--------|:------|:--------------|:----------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 22 |  |  |  |  |  | X | X | | X | | X | X | X | | | | X | X | | | | | X | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 26 |  |  |  |  |  | X | | | X | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 15 |  |  |  |  |  | X | | X | X | | | | | X | | | X | | X | | | | | | | X | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 14 |  |  |  |  |  | X | X | | X | | X | | | X | | | X | | X | | | | X | | | | | | | X | | | | X | | | | | X | | X | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 7 |  |  |  |  |  | X | | | | | | | | | | | | | | | | | | | | | | | X | X | | | X | X | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | |
| 6 | 5 |  |  |  |  |  | X | | | | | X | | X | | | | | | | | | | | | | | | | X | X | | | | X | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | X | | X | X | X | X | X | X | X | X |
|
EleutherAI/qm-mixture | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: alice_label
dtype: bool
- name: bob_label
dtype: bool
- name: difficulty
dtype: int64
- name: statement
dtype: string
- name: choices
sequence: string
- name: character
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: train
num_bytes: 44733311
num_examples: 400000
- name: validation
num_bytes: 4508863
num_examples: 40000
- name: test
num_bytes: 4496765
num_examples: 40000
download_size: 0
dataset_size: 53738939
license: apache-2.0
task_categories:
- question-answering
language:
- en
pretty_name: Quirky Math (mixture)
size_categories:
- 100K<n<1M
---
# Dataset Card for "qm_mixture_1.0e"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/EleutherAI/elk-generalization
- **Point of Contact:** [Alex Mallen](alex@eleuther.ai)
### Dataset Summary
Quirky Math is a collection of datasets and models to benchmark Eliciting Latent Knowledge (ELK) methods.
The task is to classify addition equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Languages
The dataset is in English (en).
## Dataset Structure
### Data Fields
- `statement`: The text prompt to be fed into the quirky model.
- `choices`: Answer choice tokens. Responding with the first element indicates that the equation is true; responding with the second indicates that it is false. Note that [tokenizing these choices requires care](https://github.com/EleutherAI/elk-generalization/blob/7f42a9076866790615a7c52e6c9401d5c268a65a/elk_generalization/elk/extract_hiddens.py#L10).
- `character`: Alice or Bob. The name of the character in the context.
- `label`: The answer that the character in the context would give.
- `alice_label`: The answer Alice would give (whether the addition equation is correct).
- `bob_label`: The answer Bob would give (has systematic errors).
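A minimal sketch of how these fields relate; the records below are hypothetical illustrations, not actual rows from the dataset:

```python
# Hypothetical records illustrating the documented fields.
examples = [
    {"statement": "3 + 4 = 7. Alice:", "choices": [" True", " False"],
     "character": "Alice", "alice_label": True, "bob_label": True, "label": True},
    {"statement": "25 + 17 = 43. Bob:", "choices": [" True", " False"],
     "character": "Bob", "alice_label": False, "bob_label": True, "label": True},
]
for ex in examples:
    # `label` is the answer the in-context character would give: Alice answers
    # truthfully, while Bob's answers carry systematic errors.
    expected = ex["alice_label"] if ex["character"] == "Alice" else ex["bob_label"]
    assert ex["label"] == expected
```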
## Dataset Creation
See the [data generating script](https://github.com/EleutherAI/elk-generalization/blob/763b81b27fbaf7b60599b207826d913181188f0c/elk_generalization/datasets/generate_sloppy_dataset.py).
## Additional Information
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@AlexTMallen](https://github.com/AlexTMallen) and [@norabelrose](https://github.com/norabelrose) for adding this dataset. |
suolyer/translate_en2zh | ---
license: apache-2.0
---
|
relhousieny/share_bike_train | ---
dataset_info:
features:
- name: datetime
dtype: string
- name: season
dtype: int64
- name: holiday
dtype: int64
- name: workingday
dtype: int64
- name: weather
dtype: int64
- name: temp
dtype: float64
- name: atemp
dtype: float64
- name: humidity
dtype: int64
- name: windspeed
dtype: float64
- name: casual
dtype: int64
- name: registered
dtype: int64
- name: count
dtype: int64
splits:
- name: train
num_bytes: 1208346
num_examples: 10886
download_size: 222369
dataset_size: 1208346
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
somosnlp/coser_resumenes | ---
language:
- es
task_categories:
- text-classification
pretty_name: coser_resumenes
dataset_info:
features:
- name: prompt
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 2002074
num_examples: 230
download_size: 1075266
dataset_size: 2002074
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---

## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
This instruction corpus was developed from the conversational corpus COSER - Corpus Oral y Sonoro del Español Rural (https://huggingface.co/datasets/cladsu/COSER-2024).
The main motivation of this project is to give more visibility to the different linguistic varieties of the Spanish of Spain (the data collected come from the peninsula and both archipelagos) and thereby bring this technology within reach of all Spanish speakers, by developing more models capable of understanding or handling data beyond standard Spanish.
- **Curated by:** Clara Adsuar, Álvaro Bueno, Diego de Benito, Alberto Hernández, and Manuel Otero.
- **Shared by:** Clara Adsuar, Álvaro Bueno, Diego de Benito, Alberto Hernández, and Manuel Otero.
- **Language(s) (NLP):** Spanish (es)
- **License:** Public
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
This section lists the links for accessing the data. First, on the official COSER project website, under Recursos > Descargas, you can find version 4.0 of the corpus, updated with the interviews in XML format (Pueyo Mena, F. Javier: Corpus oral y sonoro del español rural etiquetado. Versión 4.0 [March 2024]).
The Hugging Face repository contains the 230 interviews downloadable from the website, pre-processed and in CSV format.
Finally, the GitHub repository gives access to the scripts we used to obtain the information required for each task, the functions created specifically for this corpus, and the scripts for prompt creation.
- **Webpage:** http://www.corpusrural.es/
- **Hugging Face Corpus Repository:** https://huggingface.co/datasets/cladsu/COSER-2024
- **GitHub Scripts Repository:** https://github.com/cladsu/SomosNLP2004-COSER-corpus
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
The dataset file is a CSV with three fields: prompt, input, and output. The prompt field is the construction that presents the task; in this case there is a single input prompt (kept verbatim in Spanish, as fed to the model):
- "A continuación vas a recibir una entrevista en la que pueden participar varios entrevistadores (E), indicados como E1, E2, ..., y varios informadores (I), indicados como I1, I2, sucesivamente. Ten en cuenta que los detalles personales sobre algunas personas han sido anonimizados.Resume en uno o dos párrafos el contenido de la entrevista, prestando atención a los detalles más relevantes.Texto de la entrevista:"
This prompt was the template we used to describe the task to the language model served via Ollama (https://ollama.com/library/llama2:13b-chat-q4_0) so that it would generate the outputs found in the "output" field.
We decided to keep the input prompt in a separate field rather than fold it into the input, since this gives more flexibility to change or improve it in the future.
In "input" you will find excerpts of the interviews from the Hugging Face corpus (https://huggingface.co/datasets/cladsu/COSER-2024).
These excerpts correspond to the first 50 turns of each interview.
"Output" is the field containing the information generated for the task. Here the task is to summarize the interview fragments, so the output observed in the dataset is a brief summary, one or two paragraphs long, that mainly recounts the conversation topics covered.
This output, also generated with the same model via Ollama (https://ollama.com/library/llama2:13b-chat-q4_0), has proven very useful and effective for summarizing the fragments provided.
## Dataset Creation
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
The Corpus Oral y Sonoro del Español Rural - COSER (http://www.corpusrural.es/) comprises 1,772 semi-directed interviews (1,910 recorded hours) dating from between 1990 and 2022. The interviewees come from rural areas and have an average age of 74; they are generally people with little formal education and limited geographic mobility. The proportion of men and women interviewed is balanced: 47.8% men and 52.2% women. To date, 1,415 locations across Spanish territory (the peninsula and the two archipelagos) have been registered in the corpus.
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
Data collection and processing went through several phases: creation of a specialized dataset for identifying provinces, creation of input/output prompts, and final compilation of the data.
##### Dataset Pre-processing
During pre-processing we decided to remove the linguistic-mark tags present in the original corpus. Some of them provide information about certain linguistic phenomena; others mark noises, onomatopoeias, etc.
We also removed the Simultaneous Speech and Cross Talk tags, keeping only what the speaker says in their own turn, without interruptions or additional material from other individuals.
For more information about the marks and phenomena removed from this dataset, see the Dataset Description section of the COSER repository (https://huggingface.co/datasets/cladsu/COSER-2024).
##### Province Identification Dataset
Our first task was to define a series of Python functions to process the data, which came in csv format with all the turns of all the manually reviewed and annotated interviews (230 interviews in total).
We first wrote a function to load the csv file into a pandas dataframe. With the dataframe in hand, we could apply a function to extract fragments from each interview.
This function takes the dataframe, the interview name, and the start and end turns (i.e., which turns to collect). In our case,
the turn range was turn_ini = 0 and turn_fin = 50. The extracted fragments contain the text (what is said in the turn) and the speaker_id
(who speaks in the turn, marked E for interviewer and I for informant).
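A minimal sketch of such a fragment-extraction function (the function, column names, and sample data below are illustrative assumptions, not the project's actual code):

```python
import pandas as pd

def get_fragment(df: pd.DataFrame, interview: str, turn_ini: int, turn_fin: int) -> pd.DataFrame:
    """Return the turns [turn_ini, turn_fin] of one interview, keeping text and speaker_id."""
    mask = (df["interview"] == interview) & df["turn"].between(turn_ini, turn_fin)
    return df.loc[mask, ["turn", "speaker_id", "text"]].reset_index(drop=True)

# Toy dataframe standing in for the csv of annotated interview turns.
df = pd.DataFrame({
    "interview": ["A", "A", "A", "B"],
    "turn": [0, 1, 2, 0],
    "speaker_id": ["E1", "I1", "E1", "E1"],
    "text": ["¿Cómo se hacía el pan?", "Pues con levadura...", "¿Y el horno?", "Buenas tardes."],
})
frag = get_fragment(df, "A", 0, 1)
```

In the actual pipeline the same call would be made with turn_ini = 0 and turn_fin = 50 on the full dataframe.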
We also implemented a function to collect the conversation topics. These appear in the text as the tag T followed by a number between
0 and 22. Conversation topics are annotated in the original corpus where they begin, but not where they end. The first sentences of each interview
therefore carry a '0' in the "topics" field (no topic specified), and once the first topic appears its tag is carried forward until the next tag (which marks the topic change).
This way we can also track which topics are discussed, when, and in which interviews.
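The carry-forward logic described above can be sketched as follows (the exact tag syntax in the corpus is an assumption here; only the forward-fill idea is the point):

```python
import re

def fill_topics(texts: list[str]) -> list[str]:
    """Carry each topic tag (T followed by a number 0-22) forward until the next tag appears."""
    topics, current = [], "0"  # '0' = no topic specified yet
    for text in texts:
        match = re.search(r"\bT(\d{1,2})\b", text)
        if match:
            current = match.group(1)
        topics.append(current)
    return topics

turns = ["Buenos días.", "T3 Hablemos de la matanza.", "Se hacía en invierno.", "T7 ¿Y las fiestas?"]
topics = fill_topics(turns)
```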
It is worth noting that in this dataset we chose to keep the regionalisms present in the text. Regionalisms, or dialectal variants, are marked in the original corpus as: (dialectal form = standard form).
We therefore implemented a function to choose between keeping the dialectal forms or the standard ones. In our case, we kept the dialectal forms, since the original motivation of the corpus is to give visibility to under-represented linguistic varieties.
This function iterates over all the "text" values (the transcription of what is said in each turn) and filters on the "=" symbol to access the disambiguation of each term into its dialectal variant.
It then reconstructs the text, keeping only the dialectal form.
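A hedged sketch of that filtering step, using the "(dialectal = standard)" convention described above (the regex and sample sentence are our own illustration):

```python
import re

# Pattern: "(dialectal form = standard form)"; capture group 1 keeps the dialectal side.
DIALECT_PATTERN = re.compile(r"\(([^()=]+?)\s*=\s*[^()]+\)")

def keep_dialectal(text: str) -> str:
    """Replace every '(dialectal = standard)' annotation with the dialectal form only."""
    return DIALECT_PATTERN.sub(lambda m: m.group(1).strip(), text)

sample = "Íbamos (pa = para) el pueblo con (mu = muy) poca cosa."
clean = keep_dialectal(sample)
```

Swapping `m.group(1)` for the right-hand side of the pattern would instead keep the standard forms.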
##### Prompt Creation and Final Compilation
In this dataset, the input prompts do not vary, since we use the prompt template passed to Ollama (https://ollama.com/library/llama2:13b-chat-q4_0) to generate the outputs.
To create the output prompts, we wrote a Python script. It uses the functions script described in the previous section
to open the csv and convert it into a dataframe, keep the regionalisms, and collect the 'topics'.
To generate the output prompts, we supplied the prompt template
("A continuación vas a recibir una entrevista en la que pueden participar varios entrevistadores (E), indicados como E1, E2, ..., y varios informadores (I), indicados como I1, I2, sucesivamente. Ten en cuenta que los detalles personales sobre algunas personas han sido anonimizados.
Texto de la entrevista: {text} Resume en uno o dos párrafos el contenido de la entrevista, prestando atención a los detalles más relevantes.")
together with the "text" variable containing the interview fragments.
We generated them with the llama2:13b-chat-q4_0 model served with Ollama (https://ollama.com/library/llama2:13b-chat-q4_0) at a temperature of 0.1.
Once all the data, prompts, and their corresponding fragments are obtained, they are stored in a csv with the structure prompt, input, output.
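The generation step can be sketched as below. The REST endpoint is Ollama's standard local API; the template string is abbreviated from the one quoted above, and the network call is shown only for illustration (it assumes a local Ollama server is running):

```python
import json
import urllib.request

# Template quoted in the card (abbreviated here); {text} is filled with each interview fragment.
TEMPLATE = (
    "A continuación vas a recibir una entrevista en la que pueden participar varios "
    "entrevistadores (E), indicados como E1, E2, ..., y varios informadores (I). "
    "Texto de la entrevista: {text} Resume en uno o dos párrafos el contenido de la "
    "entrevista, prestando atención a los detalles más relevantes."
)

def build_prompt(fragment: str) -> str:
    return TEMPLATE.format(text=fragment)

def summarize(fragment: str, host: str = "http://localhost:11434") -> str:
    """Call a local Ollama server with temperature 0.1 and return the generated summary."""
    payload = json.dumps({
        "model": "llama2:13b-chat-q4_0",
        "prompt": build_prompt(fragment),
        "stream": False,
        "options": {"temperature": 0.1},
    }).encode()
    req = urllib.request.Request(f"{host}/api/generate", data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

prompt = build_prompt("E1: ¿Cómo era la vida en el pueblo? I1: Pues muy dura...")
```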
## Citations
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
COSER Corpus, version 4.0 (March 2024):
- Pueyo Mena, F. Javier: Corpus oral y sonoro del español rural etiquetado. Versión 4.0 [marzo 2024]
COSER SomosNLP2024 GitHub:
- Cladsu. (2024). SomosNLP2004-COSER-corpus. Retrieved from https://github.com/cladsu/SomosNLP2004-COSER-corpus
COSER corpus on Hugging Face:
- Cladsu. (2024). COSER-2024. Hugging Face. Retrieved from https://huggingface.co/datasets/cladsu/COSER-2024
## Dataset Card Authors
Clara Adsuar - https://huggingface.co/cladsu
Álvaro Bueno - https://huggingface.co/AlvaroBueno
Diego de Benito - https://huggingface.co/dbenitog
Alberto Hernández - https://huggingface.co/alherra26
Manuel Otero - https://huggingface.co/mxnuueel
## Dataset Card Contact
If you have any questions about this project, you can contact any of the Dataset Card Authors.
Any of us can answer your questions, as this has been a collaborative effort among all members. |
TaMduluza/fire_detection | ---
license: mit
---
|
tr416/dataset_20231006_232347 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 74080
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_232347"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
arianhosseini/gsm_preference_v1 | ---
configs:
- config_name: balanced
data_files:
- split: train
path: "preference_data_balanced.jsonl.train"
- split: valid
path: "preference_data_balanced.jsonl.valid"
- config_name: unbalanced
data_files:
- split: train
path: "preference_data_unbalanced.jsonl.train"
- split: valid
path: "preference_data_unbalanced.jsonl.valid"
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
javaabu/dhivehi-khadheeja-speech | ---
license: apache-2.0
task_categories:
- automatic-speech-recognition
- text-to-speech
language:
- dv
tags:
- audio
- dhivehi
- speech
- khadheeja
- narrated
size_categories:
- 1K<n<10K
---
# Dataset Card for Dhivehi Khadheeja Speech 1.0
### Dataset Summary
Dhivehi Khadheeja Speech is a single speaker Dhivehi speech dataset created by [Javaabu Pvt. Ltd.](https://javaabu.com).
The dataset contains around 20 hours of text read by the professional Maldivian narrator Khadheeja Faaz.
The texts used for the recordings were scraped from various Maldivian news websites.
### Supported Tasks and Leaderboards
- Automatic Speech Recognition
- Text-to-Speech
### Languages
Dhivehi
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file and its sentence.
```json
{
'path': 'dhivehi-khadheeja-speech-train/waves/khadeejafaaz_6_1498pmzd.wav',
'sentence': 'އެއްވެސް ފިޔަވަޅެއް އެޅި ކަން އެނގިވަޑައިގެންފައި ނުވާ ކަމަށާއި އެފަދަ ފިޔަވަޅެއް އަޅާފައިވާ ނަމަ އެކަން',
'audio': {
'path': 'dhivehi-khadheeja-speech-train/waves/khadeejafaaz_6_1498pmzd.wav',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000
},
}
```
### Data Fields
- path (string): The path to the audio file.
- sentence (string): The transcription for the audio file.
- audio (dict): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column, `dataset[0]["audio"]` automatically decodes and resamples the audio file to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column: `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`.
### Data Splits
The speech material has been subdivided into portions for train, test and validation.
| | Train | Validation | Test | Total |
| ---------------- |----------|------------|----------|----------|
| Utterances | 9307 | 1164 | 1164 | 11635 |
| Duration | 15:49:13 | 01:59:46 | 02:11:28 | 20:00:27 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Data was collected through the AduEhy TTS Management System developed by Javaabu.
The narrator was shown text snippets one at a time, which were then read and recorded through the browser.
Only minimal text normalization has been performed, which involved replacing multiple whitespaces and new lines with single spaces.
#### Who are the source language producers?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{Javaabu_2023,
title = "Dhivehi Khadheeja Speech Dataset",
url = "https://huggingface.co/datasets/javaabu/dhivehi-khadheeja-speech",
journal = "Hugging Face",
author = {{Javaabu Pvt. Ltd.}},
year = "2023",
month = jul
}
```
### Contributions
- [Arushad Ahmed](https://arushad.org)
- [Mohamed Jailam](https://github.com/muhammedjailam)
- [Ibrahim Shareef](https://github.com/ihshareef) |
Limour/archvie | ---
license: cc-by-nc-sa-4.0
---
|
shreyasmani/whrdata2021 | ---
license: other
---
|
distilabel-internal-testing/test-distiset-2-configs | ---
size_categories: n<1K
config_names:
- generate_response_1
- generate_response_2
tags:
- synthetic
- distilabel
- rlaif
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for test-distiset-2-configs
This dataset has been created with [Distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI.
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: generate_response_1 </summary><hr>
```json
{
"completion": "Response here.",
"instruction": "What if the Beatles had never formed as a band?"
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("distilabel-internal-testing/test-distiset-2-configs", "generate_response_1")
```
</details>
<details><summary> Configuration: generate_response_2 </summary><hr>
```json
{
"completion": "Response here.",
"instruction": "What if the Beatles had never formed as a band?"
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("distilabel-internal-testing/test-distiset-2-configs", "generate_response_2")
```
</details>
|
MicPie/unpredictable_en-wikipedia-org | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: UnpredicTable-en-wikipedia-org
size_categories:
- 100K<n<1M
source_datasets: []
task_categories:
- multiple-choice
- question-answering
- zero-shot-classification
- text2text-generation
- table-question-answering
- text-generation
- text-classification
- tabular-classification
task_ids:
- multiple-choice-qa
- extractive-qa
- open-domain-qa
- closed-domain-qa
- closed-book-qa
- open-book-qa
- language-modeling
- multi-class-classification
- natural-language-inference
- topic-classification
- multi-label-classification
- tabular-multi-class-classification
- tabular-multi-label-classification
---
# Dataset Card for "UnpredicTable-en-wikipedia-org" - Dataset of Few-shot Tasks from Tables
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://ethanperez.net/unpredictable
- **Repository:** https://github.com/JunShern/few-shot-adaptation
- **Paper:** Few-shot Adaptation Works with UnpredicTable Data
- **Point of Contact:** junshern@nyu.edu, perez@nyu.edu
### Dataset Summary
The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.
There are several dataset versions available:
* [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites.
* [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset.
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):
* [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low)
* [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium)
* [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high)
* UnpredicTable data subsets based on the website of origin:
* [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com)
* [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net)
* [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com)
* [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com)
* [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com)
* [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com)
* [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org)
* [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org)
* [UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com)
* [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com)
* [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com)
* [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com)
* [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com)
* [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com)
* [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com)
* [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com)
* [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com)
* [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org)
* [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org)
* [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org)
* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):
* [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00)
* [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01)
* [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02)
* [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03)
* [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04)
* [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05)
* [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06)
* [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07)
* [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08)
* [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09)
* [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10)
* [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11)
* [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12)
* [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13)
* [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14)
* [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15)
* [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16)
* [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17)
* [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18)
* [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19)
* [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20)
* [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21)
* [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22)
* [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23)
* [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24)
* [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25)
* [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26)
* [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27)
* [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28)
* [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29)
* [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise)
### Supported Tasks and Leaderboards
Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have thousands of tasks, while each task has only a few examples, compared to most current NLP datasets, which are very deep, i.e., tens of tasks with many examples each. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.
The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.
### Languages
English
## Dataset Structure
### Data Instances
Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.
There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.
### Data Fields
- `task`: task identifier
- `input`: column elements of a specific row in the table
- `options`: for multiple-choice classification, the options to choose from
- `output`: target column element of the same row as the input
- `pageTitle`: the title of the page containing the table
- `outputColName`: output column name
- `url`: URL of the website containing the table
- `wdcFile`: WDC Web Table Corpus file
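As an illustration of how a task's examples can be concatenated into a few-shot prompt (the field names follow the card; the prompt formatting and toy data are our own assumptions):

```python
def few_shot_prompt(examples: list[dict], query_input: str) -> str:
    """Concatenate a task's input/output examples into a few-shot prompt for a new query."""
    parts = [f"Input: {ex['input']}\nOutput: {ex['output']}" for ex in examples]
    parts.append(f"Input: {query_input}\nOutput:")
    return "\n\n".join(parts)

# Toy examples standing in for one task's jsonlines records.
examples = [
    {"input": "Country: France | Capital:", "output": "Paris"},
    {"input": "Country: Japan | Capital:", "output": "Tokyo"},
]
prompt = few_shot_prompt(examples, "Country: Spain | Capital:")
```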
### Data Splits
The UnpredicTable datasets do not come with additional data splits.
## Dataset Creation
### Curation Rationale
Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.
### Source Data
#### Initial Data Collection and Normalization
We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.
#### Who are the source language producers?
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).
### Annotations
#### Annotation process
Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low),
[UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed instructions of the annotation instructions can be found in our publication.
#### Who are the annotators?
Annotations were carried out by a lab assistant.
### Personal and Sensitive Information
The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.
### Discussion of Biases
Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.
### Other Known Limitations
No additional known limitations.
## Additional Information
### Dataset Curators
Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez
### Licensing Information
Apache 2.0
### Citation Information
```
@misc{chan2022few,
author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
title = {Few-shot Adaptation Works with UnpredicTable Data},
publisher={arXiv},
year = {2022},
url = {https://arxiv.org/abs/2208.01009}
}
```
|
GEM/wiki_lingua | ---
annotations_creators:
- none
language_creators:
- unknown
language:
- ar
- cs
- de
- en
- es
- fr
- hi
- id
- it
- ja
- ko
- nl
- pt
- ru
- th
- tr
- vi
- zh
license:
- cc-by-nc-sa-3.0
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- summarization
task_ids: []
pretty_name: wiki_lingua
---
# Dataset Card for GEM/wiki_lingua
## Dataset Description
- **Homepage:** None (See Repository)
- **Repository:** https://github.com/esdurmus/Wikilingua
- **Paper:** https://www.aclweb.org/anthology/2020.findings-emnlp.360/
- **Leaderboard:** N/A
- **Point of Contact:** Faisal Ladhak, Esin Durmus
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/wiki_lingua).
### Dataset Summary
Placeholder
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/wiki_lingua')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/wiki_lingua).
#### website
None (See Repository)
#### paper
https://www.aclweb.org/anthology/2020.findings-emnlp.360/
#### authors
Faisal Ladhak (Columbia University), Esin Durmus (Stanford University), Claire Cardie (Cornell University), Kathleen McKeown (Columbia University)
## Dataset Overview
### Where to find the Data and its Documentation
#### Webpage
<!-- info: What is the webpage for the dataset (if it exists)? -->
<!-- scope: telescope -->
None (See Repository)
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
https://github.com/esdurmus/Wikilingua
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
https://www.aclweb.org/anthology/2020.findings-emnlp.360/
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
@inproceedings{ladhak-etal-2020-wikilingua,
title = "{W}iki{L}ingua: A New Benchmark Dataset for Cross-Lingual Abstractive Summarization",
author = "Ladhak, Faisal and
Durmus, Esin and
Cardie, Claire and
McKeown, Kathleen",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.findings-emnlp.360",
doi = "10.18653/v1/2020.findings-emnlp.360",
pages = "4034--4048",
abstract = "We introduce WikiLingua, a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems. We extract article and summary pairs in 18 languages from WikiHow, a high quality, collaborative resource of how-to guides on a diverse set of topics written by human authors. We create gold-standard article-summary alignments across languages by aligning the images that are used to describe each how-to step in an article. As a set of baselines for further studies, we evaluate the performance of existing cross-lingual abstractive summarization methods on our dataset. We further propose a method for direct cross-lingual summarization (i.e., without requiring translation at inference time) by leveraging synthetic data and Neural Machine Translation as a pre-training step. Our method significantly outperforms the baseline approaches, while being more cost efficient during inference.",
}
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Faisal Ladhak, Esin Durmus
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
faisal@cs.columbia.edu, esdurmus@stanford.edu
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
no
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
yes
#### Covered Dialects
<!-- info: What dialects are covered? Are there multiple dialects per language? -->
<!-- scope: periscope -->
Dataset does not have multiple dialects per language.
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`English`, `Spanish, Castilian`, `Portuguese`, `French`, `German`, `Russian`, `Italian`, `Indonesian`, `Dutch, Flemish`, `Arabic`, `Chinese`, `Vietnamese`, `Thai`, `Japanese`, `Korean`, `Hindi`, `Czech`, `Turkish`
#### Whose Language?
<!-- info: Whose language is in the dataset? -->
<!-- scope: periscope -->
No information about the user demographic is available.
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
cc-by-nc-sa-3.0: Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported (CC BY-NC-SA 3.0)
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
The dataset was intended to serve as a large-scale, high-quality benchmark dataset for cross-lingual summarization.
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Summarization
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
Produce a high-quality summary for the given input article.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`academic`
#### Curation Organization(s)
<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
Columbia University
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Faisal Ladhak (Columbia University), Esin Durmus (Stanford University), Claire Cardie (Cornell University), Kathleen McKeown (Columbia University)
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Jenny Chim (Queen Mary University of London), Faisal Ladhak (Columbia University)
### Dataset Structure
#### Data Fields
<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
gem_id -- The id for the data instance.
source_language -- The language of the source article.
target_language -- The language of the target summary.
source -- The source document.
target -- The target summary.
#### Example Instance
<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
{
"gem_id": "wikilingua_crosslingual-train-12345",
"gem_parent_id": "wikilingua_crosslingual-train-12345",
"source_language": "fr",
"target_language": "de",
"source": "Document in fr",
"target": "Summary in de",
}
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
The data is split into train/dev/test. In addition to the full test set, there's also a sampled version of the test set.
#### Splitting Criteria
<!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
<!-- scope: microscope -->
The data was split to ensure the same document would appear in the same split across languages so as to ensure there's no leakage into the test set.
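The paper does not publish its exact splitting code, but the leakage-free property described above can be enforced deterministically. The sketch below (an illustration, not the authors' actual implementation) hashes the language-independent document id, so every translation of the same article necessarily lands in the same split:

```python
import hashlib

def assign_split(doc_id: str, dev_frac: float = 0.1, test_frac: float = 0.1) -> str:
    """Deterministically map a document id to train/dev/test.

    Because the decision depends only on the language-independent id,
    all language versions of an article share one split, preventing
    cross-lingual leakage into the test set.
    """
    # Stable hash: md5 digest of the id, reduced to a float in [0, 1).
    h = int(hashlib.md5(doc_id.encode("utf-8")).hexdigest(), 16)
    u = (h % 10_000) / 10_000
    if u < test_frac:
        return "test"
    if u < test_frac + dev_frac:
        return "dev"
    return "train"

# Every language version of article "12345" lands in the same split.
assert len({assign_split("12345") for lang in ("en", "fr", "de")}) == 1
```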
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
This dataset provides a large-scale, high-quality resource for cross-lingual summarization in 18 languages, increasing the coverage of languages for the GEM summarization task.
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
yes
#### Unique Language Coverage
<!-- info: Does this dataset cover other languages than other datasets for the same task? -->
<!-- scope: periscope -->
yes
#### Difference from other GEM datasets
<!-- info: What else sets this dataset apart from other similar datasets in GEM? -->
<!-- scope: microscope -->
XSum covers English news articles, and MLSum covers news articles in German and Spanish.
In contrast, this dataset has how-to articles in 18 languages, substantially increasing the languages covered. Moreover, it also provides a different domain than the other two datasets.
#### Ability that the Dataset measures
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
The ability to generate quality summaries across multiple languages.
### GEM-Specific Curation
#### Modified for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
yes
#### GEM Modifications
<!-- info: What changes have been made to the original dataset? -->
<!-- scope: periscope -->
`other`
#### Modification Details
<!-- info: For each of these changes, describe them in more detail and provide the intended purpose of the modification -->
<!-- scope: microscope -->
Previous version had separate data loaders for each language. In this version, we've created a single monolingual data loader, which contains monolingual data in each of the 18 languages. In addition, we've also created a single cross-lingual data loader across all the language pairs in the dataset.
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no
### Getting Started with the Task
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
Ability to summarize content across different languages.
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`ROUGE`
#### Proposed Evaluation
<!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
<!-- scope: microscope -->
ROUGE is used to measure content selection by comparing word overlap with reference summaries. In addition, the authors of the dataset also used human evaluation to evaluate content selection and fluency of the systems.
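The reported scores are computed with standard ROUGE tooling (in practice an established package such as `rouge-score`, which also handles stemming and ROUGE-L, should be used). Purely as an illustration of what unigram overlap measures, here is a minimal ROUGE-1 F1 sketch in plain Python:

```python
from collections import Counter

def rouge1_f(candidate: str, reference: str) -> float:
    """Unigram ROUGE-1 F1: harmonic mean of unigram precision and recall."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Identical strings score 1.0; disjoint strings score 0.0.
assert rouge1_f("a b c", "a b c") == 1.0
assert rouge1_f("x y", "a b") == 0.0
```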
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
no
## Dataset Curation
### Original Curation
#### Original Curation Rationale
<!-- info: Original curation rationale -->
<!-- scope: telescope -->
The dataset was created to enable new approaches for cross-lingual and multilingual summarization, which are currently understudied, and to open up interesting new research directions in summarization, e.g., exploring multi-source cross-lingual architectures (models that can summarize from multiple source languages into a target language) and building models that can summarize articles from any language to any other language within a given set of languages.
#### Communicative Goal
<!-- info: What was the communicative goal? -->
<!-- scope: periscope -->
Given an input article, produce a high quality summary of the article in the target language.
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
no
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Found`
#### Where was it found?
<!-- info: If found, where from? -->
<!-- scope: telescope -->
`Single website`
#### Language Producers
<!-- info: What further information do we have on the language producers? -->
<!-- scope: microscope -->
WikiHow, an online resource of how-to guides written and reviewed by human authors, is used as the data source.
#### Topics Covered
<!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
<!-- scope: periscope -->
The articles cover 19 broad categories including health, arts and entertainment, personal care and style, travel, education and communications, etc. The categories cover a broad set of genres and topics.
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
not validated
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
not filtered
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
none
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
yes
#### Consent Policy Details
<!-- info: What was the consent policy? -->
<!-- scope: microscope -->
(1) Text Content. All text posted by Users to the Service is sub-licensed by wikiHow to other Users under a Creative Commons license as provided herein. The Creative Commons license allows such text content be used freely for non-commercial purposes, so long as it is used and attributed to the original author as specified under the terms of the license. Allowing free republication of our articles helps wikiHow achieve its mission by providing instruction on solving the problems of everyday life to more people for free. In order to support this goal, wikiHow hereby grants each User of the Service a license to all text content that Users contribute to the Service under the terms and conditions of a Creative Commons CC BY-NC-SA 3.0 License. Please be sure to read the terms of the license carefully. You continue to own all right, title, and interest in and to your User Content, and you are free to distribute it as you wish, whether for commercial or non-commercial purposes.
#### Other Consented Downstream Use
<!-- info: What other downstream uses of the data did the original data creators and the data curators consent to? -->
<!-- scope: microscope -->
The data is made freely available under the Creative Commons license, therefore there are no restrictions on downstream uses as long as they are non-commercial.
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
no PII
#### Justification for no PII
<!-- info: Provide a justification for selecting `no PII` above. -->
<!-- scope: periscope -->
Only the article text and summaries were collected. No user information was retained in the dataset.
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
<!-- scope: telescope -->
yes - other datasets featuring the same task
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
no
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
yes
## Considerations for Using the Data
### PII Risks and Liability
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`non-commercial use only`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`non-commercial use only`
### Known Technical Limitations
|
KaleidoSG/Helix | ---
license: cc-by-4.0
task_categories:
- question-answering
- translation
- summarization
- text-generation
- conversational
language:
- en
tags:
- code
- airoboros
- language
- merge
- gpt
pretty_name: helix
size_categories:
- 100K<n<1M
---
# Helix Dataset for Questioning and Instructing (QI)
## Description
The Helix dataset is a specialized collection of data tailored for Questioning and Instructing (QI) tasks. It is created by merging all the Airoboros datasets and incorporating one RosettaCode dataset, with a primary focus on supporting QI research and applications.
## Dataset Details
- **Source Datasets**: Airoboros datasets (various sources), RosettaCode dataset
- **Merging Script**: The merging of these datasets was performed using the `bowie.py` script, which is included in this repository. The script facilitates the formatting and integration of the datasets to create the Helix dataset optimized for QI tasks.
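The exact logic of `bowie.py` is not reproduced here; the following is an illustrative sketch (the field names, fallback keys, and file layout are assumptions) of how several instruction-style JSONL files can be normalised onto a common schema and concatenated into a single QI-format file:

```python
import json

def normalise(record: dict) -> dict:
    """Map one source record onto a common instruction/response schema.

    Key names are illustrative; each Airoboros variant may use
    different field names, so fallbacks would need adjusting.
    """
    return {
        "instruction": record.get("instruction") or record.get("prompt", ""),
        "response": record.get("response") or record.get("output", ""),
    }

def merge_datasets(jsonl_paths, out_path) -> int:
    """Concatenate several JSONL files into out_path; return record count."""
    n = 0
    with open(out_path, "w", encoding="utf-8") as out:
        for path in jsonl_paths:
            with open(path, encoding="utf-8") as f:
                for line in f:
                    if line.strip():
                        out.write(json.dumps(normalise(json.loads(line))) + "\n")
                        n += 1
    return n
```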
## Usage
The Helix dataset is particularly suited for researchers and developers working on QI tasks, including:
- Developing QI systems that can understand and respond to natural language queries and instructions.
- Training and evaluating machine learning models for QI applications.
- Benchmarking QI algorithms and techniques.
- Investigating the intersection of natural language understanding and instructional responses.
## License
Please refer to the individual licenses of the source datasets for specific licensing information. Ensure compliance with the respective licenses when using the Helix dataset.
## Citation
If you use the Helix dataset for QI research or projects, please consider citing it using the appropriate citation format for each of the source datasets and the `bowie.py` script.
```
Marcus. 2023. Helix Dataset for Questioning and Instructing (QI). Helix. Self-published. https://huggingface.co/datasets/KaleidoSG/Helix
```
## Acknowledgments
We express our gratitude to the creators and maintainers of the Airoboros datasets and the RosettaCode dataset for their valuable contributions to this specialized dataset for Questioning and Instructing (QI) tasks. |
tsetsuuhei/filtered_test_dataset | ---
dataset_info:
features:
- name: translation
struct:
- name: en
dtype: string
- name: es
dtype: string
splits:
- name: train
num_bytes: 499376
num_examples: 1501
download_size: 351984
dataset_size: 499376
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
NobodyExistsOnTheInternet/UnfilteredEvolvedConversations | ---
license: mit
---
|
LsChicha/test_2 | ---
license: apache-2.0
---
|
rafaelramalhoo/marapavanelly | ---
license: openrail
---
|
Falcon96/hoper | ---
license: openrail
---
|
Atipico1/NQ_train_preprocessed | ---
dataset_info:
features:
- name: question
dtype: string
- name: answers
sequence: string
- name: ctxs
list:
- name: hasanswer
dtype: bool
- name: id
dtype: string
- name: score
dtype: float64
- name: text
dtype: string
- name: title
dtype: string
- name: query_embedding
sequence: float32
splits:
- name: train
num_bytes: 563423383
num_examples: 87925
download_size: 498394993
dataset_size: 563423383
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
one-sec-cv12/chunk_175 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 18330856848.625
num_examples: 190851
download_size: 16613815066
dataset_size: 18330856848.625
---
# Dataset Card for "chunk_175"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_Nitral-AI__Nyanade_Stunna-Maid-7B | ---
pretty_name: Evaluation run of Nitral-AI/Nyanade_Stunna-Maid-7B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Nitral-AI/Nyanade_Stunna-Maid-7B](https://huggingface.co/Nitral-AI/Nyanade_Stunna-Maid-7B)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Nitral-AI__Nyanade_Stunna-Maid-7B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-04-15T19:45:54.364018](https://huggingface.co/datasets/open-llm-leaderboard/details_Nitral-AI__Nyanade_Stunna-Maid-7B/blob/main/results_2024-04-15T19-45-54.364018.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.644657887726958,\n\
\ \"acc_stderr\": 0.03223052899289077,\n \"acc_norm\": 0.6482184817868044,\n\
\ \"acc_norm_stderr\": 0.03286848000867033,\n \"mc1\": 0.41982864137086906,\n\
\ \"mc1_stderr\": 0.01727703030177577,\n \"mc2\": 0.5910902028361578,\n\
\ \"mc2_stderr\": 0.015341341318883641\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6313993174061433,\n \"acc_stderr\": 0.014097810678042194,\n\
\ \"acc_norm\": 0.6612627986348123,\n \"acc_norm_stderr\": 0.01383056892797433\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.664708225453097,\n\
\ \"acc_stderr\": 0.004711275408138426,\n \"acc_norm\": 0.8525194184425413,\n\
\ \"acc_norm_stderr\": 0.0035385967737048105\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.38,\n \"acc_stderr\": 0.048783173121456316,\n \
\ \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.048783173121456316\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6,\n \
\ \"acc_stderr\": 0.042320736951515885,\n \"acc_norm\": 0.6,\n \
\ \"acc_norm_stderr\": 0.042320736951515885\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6710526315789473,\n \"acc_stderr\": 0.038234289699266046,\n\
\ \"acc_norm\": 0.6710526315789473,\n \"acc_norm_stderr\": 0.038234289699266046\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.58,\n\
\ \"acc_stderr\": 0.04960449637488583,\n \"acc_norm\": 0.58,\n \
\ \"acc_norm_stderr\": 0.04960449637488583\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7056603773584905,\n \"acc_stderr\": 0.02804918631569525,\n\
\ \"acc_norm\": 0.7056603773584905,\n \"acc_norm_stderr\": 0.02804918631569525\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7569444444444444,\n\
\ \"acc_stderr\": 0.035868792800803406,\n \"acc_norm\": 0.7569444444444444,\n\
\ \"acc_norm_stderr\": 0.035868792800803406\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620333,\n \
\ \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.54,\n \"acc_stderr\": 0.05009082659620333,\n \"acc_norm\": 0.54,\n\
\ \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6647398843930635,\n\
\ \"acc_stderr\": 0.03599586301247078,\n \"acc_norm\": 0.6647398843930635,\n\
\ \"acc_norm_stderr\": 0.03599586301247078\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4215686274509804,\n \"acc_stderr\": 0.04913595201274498,\n\
\ \"acc_norm\": 0.4215686274509804,\n \"acc_norm_stderr\": 0.04913595201274498\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.79,\n \"acc_stderr\": 0.04093601807403326,\n \"acc_norm\": 0.79,\n\
\ \"acc_norm_stderr\": 0.04093601807403326\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5531914893617021,\n \"acc_stderr\": 0.0325005368436584,\n\
\ \"acc_norm\": 0.5531914893617021,\n \"acc_norm_stderr\": 0.0325005368436584\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4824561403508772,\n\
\ \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.4824561403508772,\n\
\ \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5793103448275863,\n \"acc_stderr\": 0.0411391498118926,\n\
\ \"acc_norm\": 0.5793103448275863,\n \"acc_norm_stderr\": 0.0411391498118926\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.3994708994708995,\n \"acc_stderr\": 0.025225450284067884,\n \"\
acc_norm\": 0.3994708994708995,\n \"acc_norm_stderr\": 0.025225450284067884\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.42857142857142855,\n\
\ \"acc_stderr\": 0.0442626668137991,\n \"acc_norm\": 0.42857142857142855,\n\
\ \"acc_norm_stderr\": 0.0442626668137991\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \
\ \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7580645161290323,\n\
\ \"acc_stderr\": 0.024362599693031096,\n \"acc_norm\": 0.7580645161290323,\n\
\ \"acc_norm_stderr\": 0.024362599693031096\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5320197044334976,\n \"acc_stderr\": 0.035107665979592154,\n\
\ \"acc_norm\": 0.5320197044334976,\n \"acc_norm_stderr\": 0.035107665979592154\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\"\
: 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7636363636363637,\n \"acc_stderr\": 0.03317505930009181,\n\
\ \"acc_norm\": 0.7636363636363637,\n \"acc_norm_stderr\": 0.03317505930009181\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8080808080808081,\n \"acc_stderr\": 0.028057791672989017,\n \"\
acc_norm\": 0.8080808080808081,\n \"acc_norm_stderr\": 0.028057791672989017\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8911917098445595,\n \"acc_stderr\": 0.022473253332768763,\n\
\ \"acc_norm\": 0.8911917098445595,\n \"acc_norm_stderr\": 0.022473253332768763\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6564102564102564,\n \"acc_stderr\": 0.024078696580635477,\n\
\ \"acc_norm\": 0.6564102564102564,\n \"acc_norm_stderr\": 0.024078696580635477\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.34074074074074073,\n \"acc_stderr\": 0.02889774874113114,\n \
\ \"acc_norm\": 0.34074074074074073,\n \"acc_norm_stderr\": 0.02889774874113114\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6638655462184874,\n \"acc_stderr\": 0.030684737115135353,\n\
\ \"acc_norm\": 0.6638655462184874,\n \"acc_norm_stderr\": 0.030684737115135353\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3443708609271523,\n \"acc_stderr\": 0.038796870240733264,\n \"\
acc_norm\": 0.3443708609271523,\n \"acc_norm_stderr\": 0.038796870240733264\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8293577981651377,\n \"acc_stderr\": 0.01612927102509986,\n \"\
acc_norm\": 0.8293577981651377,\n \"acc_norm_stderr\": 0.01612927102509986\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.49074074074074076,\n \"acc_stderr\": 0.034093869469927006,\n \"\
acc_norm\": 0.49074074074074076,\n \"acc_norm_stderr\": 0.034093869469927006\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8235294117647058,\n \"acc_stderr\": 0.026756401538078962,\n \"\
acc_norm\": 0.8235294117647058,\n \"acc_norm_stderr\": 0.026756401538078962\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8185654008438819,\n \"acc_stderr\": 0.02508596114457966,\n \
\ \"acc_norm\": 0.8185654008438819,\n \"acc_norm_stderr\": 0.02508596114457966\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6771300448430493,\n\
\ \"acc_stderr\": 0.03138147637575499,\n \"acc_norm\": 0.6771300448430493,\n\
\ \"acc_norm_stderr\": 0.03138147637575499\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7709923664122137,\n \"acc_stderr\": 0.036853466317118506,\n\
\ \"acc_norm\": 0.7709923664122137,\n \"acc_norm_stderr\": 0.036853466317118506\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7933884297520661,\n \"acc_stderr\": 0.03695980128098823,\n \"\
acc_norm\": 0.7933884297520661,\n \"acc_norm_stderr\": 0.03695980128098823\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7777777777777778,\n\
\ \"acc_stderr\": 0.040191074725573483,\n \"acc_norm\": 0.7777777777777778,\n\
\ \"acc_norm_stderr\": 0.040191074725573483\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7791411042944786,\n \"acc_stderr\": 0.03259177392742178,\n\
\ \"acc_norm\": 0.7791411042944786,\n \"acc_norm_stderr\": 0.03259177392742178\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5089285714285714,\n\
\ \"acc_stderr\": 0.04745033255489123,\n \"acc_norm\": 0.5089285714285714,\n\
\ \"acc_norm_stderr\": 0.04745033255489123\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n\
\ \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8760683760683761,\n\
\ \"acc_stderr\": 0.021586494001281365,\n \"acc_norm\": 0.8760683760683761,\n\
\ \"acc_norm_stderr\": 0.021586494001281365\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \
\ \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8263090676883781,\n\
\ \"acc_stderr\": 0.013547415658662257,\n \"acc_norm\": 0.8263090676883781,\n\
\ \"acc_norm_stderr\": 0.013547415658662257\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7196531791907514,\n \"acc_stderr\": 0.024182427496577615,\n\
\ \"acc_norm\": 0.7196531791907514,\n \"acc_norm_stderr\": 0.024182427496577615\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3541899441340782,\n\
\ \"acc_stderr\": 0.015995644947299232,\n \"acc_norm\": 0.3541899441340782,\n\
\ \"acc_norm_stderr\": 0.015995644947299232\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7287581699346405,\n \"acc_stderr\": 0.025457756696667878,\n\
\ \"acc_norm\": 0.7287581699346405,\n \"acc_norm_stderr\": 0.025457756696667878\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.707395498392283,\n\
\ \"acc_stderr\": 0.02583989833487798,\n \"acc_norm\": 0.707395498392283,\n\
\ \"acc_norm_stderr\": 0.02583989833487798\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7345679012345679,\n \"acc_stderr\": 0.024569223600460845,\n\
\ \"acc_norm\": 0.7345679012345679,\n \"acc_norm_stderr\": 0.024569223600460845\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4929078014184397,\n \"acc_stderr\": 0.02982449855912901,\n \
\ \"acc_norm\": 0.4929078014184397,\n \"acc_norm_stderr\": 0.02982449855912901\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.470013037809648,\n\
\ \"acc_stderr\": 0.01274724896707906,\n \"acc_norm\": 0.470013037809648,\n\
\ \"acc_norm_stderr\": 0.01274724896707906\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.02841820861940676,\n\
\ \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.02841820861940676\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6650326797385621,\n \"acc_stderr\": 0.01909422816700032,\n \
\ \"acc_norm\": 0.6650326797385621,\n \"acc_norm_stderr\": 0.01909422816700032\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7,\n\
\ \"acc_stderr\": 0.04389311454644287,\n \"acc_norm\": 0.7,\n \
\ \"acc_norm_stderr\": 0.04389311454644287\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7428571428571429,\n \"acc_stderr\": 0.02797982353874455,\n\
\ \"acc_norm\": 0.7428571428571429,\n \"acc_norm_stderr\": 0.02797982353874455\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8507462686567164,\n\
\ \"acc_stderr\": 0.025196929874827072,\n \"acc_norm\": 0.8507462686567164,\n\
\ \"acc_norm_stderr\": 0.025196929874827072\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.87,\n \"acc_stderr\": 0.033799766898963086,\n \
\ \"acc_norm\": 0.87,\n \"acc_norm_stderr\": 0.033799766898963086\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.536144578313253,\n\
\ \"acc_stderr\": 0.038823108508905954,\n \"acc_norm\": 0.536144578313253,\n\
\ \"acc_norm_stderr\": 0.038823108508905954\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8421052631578947,\n \"acc_stderr\": 0.02796678585916089,\n\
\ \"acc_norm\": 0.8421052631578947,\n \"acc_norm_stderr\": 0.02796678585916089\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.41982864137086906,\n\
\ \"mc1_stderr\": 0.01727703030177577,\n \"mc2\": 0.5910902028361578,\n\
\ \"mc2_stderr\": 0.015341341318883641\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7813733228097869,\n \"acc_stderr\": 0.011616198215773225\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5155420773313116,\n \
\ \"acc_stderr\": 0.013765829454512886\n }\n}\n```"
repo_url: https://huggingface.co/Nitral-AI/Nyanade_Stunna-Maid-7B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|arc:challenge|25_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|gsm8k|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hellaswag|10_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-15T19-45-54.364018.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-15T19-45-54.364018.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- '**/details_harness|winogrande|5_2024-04-15T19-45-54.364018.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-04-15T19-45-54.364018.parquet'
- config_name: results
data_files:
- split: 2024_04_15T19_45_54.364018
path:
- results_2024-04-15T19-45-54.364018.parquet
- split: latest
path:
- results_2024-04-15T19-45-54.364018.parquet
---
# Dataset Card for Evaluation run of Nitral-AI/Nyanade_Stunna-Maid-7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Nitral-AI/Nyanade_Stunna-Maid-7B](https://huggingface.co/Nitral-AI/Nyanade_Stunna-Maid-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Nitral-AI__Nyanade_Stunna-Maid-7B",
"harness_winogrande_5",
split="train")
```
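Once loaded (or read from the results JSON), per-task metrics can be aggregated locally. A minimal offline sketch, using two illustrative values copied from this card rather than a live download, that averages the MMLU-style (`hendrycksTest`) accuracies:

```python
# Sketch: averaging per-task accuracies from a results dict shaped like the
# "Latest results" JSON in this card. The two entries below are illustrative
# excerpts, not a full results payload.
results = {
    "harness|hendrycksTest-virology|5": {"acc": 0.536144578313253},
    "harness|hendrycksTest-world_religions|5": {"acc": 0.8421052631578947},
}

# Keep only the MMLU (hendrycksTest) tasks, then take the unweighted mean.
mmlu_tasks = {k: v for k, v in results.items()
              if k.startswith("harness|hendrycksTest-")}
mean_acc = sum(v["acc"] for v in mmlu_tasks.values()) / len(mmlu_tasks)
print(round(mean_acc, 4))  # → 0.6891
```

The same pattern applies to the full dict under "Latest results" below; only the number of entries changes.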
## Latest results
These are the [latest results from run 2024-04-15T19:45:54.364018](https://huggingface.co/datasets/open-llm-leaderboard/details_Nitral-AI__Nyanade_Stunna-Maid-7B/blob/main/results_2024-04-15T19-45-54.364018.json) (note that there might be results for other tasks in the repository if successive evals didn't cover the same tasks. You can find each one in the results and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.644657887726958,
"acc_stderr": 0.03223052899289077,
"acc_norm": 0.6482184817868044,
"acc_norm_stderr": 0.03286848000867033,
"mc1": 0.41982864137086906,
"mc1_stderr": 0.01727703030177577,
"mc2": 0.5910902028361578,
"mc2_stderr": 0.015341341318883641
},
"harness|arc:challenge|25": {
"acc": 0.6313993174061433,
"acc_stderr": 0.014097810678042194,
"acc_norm": 0.6612627986348123,
"acc_norm_stderr": 0.01383056892797433
},
"harness|hellaswag|10": {
"acc": 0.664708225453097,
"acc_stderr": 0.004711275408138426,
"acc_norm": 0.8525194184425413,
"acc_norm_stderr": 0.0035385967737048105
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.38,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.38,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6,
"acc_stderr": 0.042320736951515885,
"acc_norm": 0.6,
"acc_norm_stderr": 0.042320736951515885
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6710526315789473,
"acc_stderr": 0.038234289699266046,
"acc_norm": 0.6710526315789473,
"acc_norm_stderr": 0.038234289699266046
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.58,
"acc_stderr": 0.04960449637488583,
"acc_norm": 0.58,
"acc_norm_stderr": 0.04960449637488583
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7056603773584905,
"acc_stderr": 0.02804918631569525,
"acc_norm": 0.7056603773584905,
"acc_norm_stderr": 0.02804918631569525
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7569444444444444,
"acc_stderr": 0.035868792800803406,
"acc_norm": 0.7569444444444444,
"acc_norm_stderr": 0.035868792800803406
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6647398843930635,
"acc_stderr": 0.03599586301247078,
"acc_norm": 0.6647398843930635,
"acc_norm_stderr": 0.03599586301247078
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4215686274509804,
"acc_stderr": 0.04913595201274498,
"acc_norm": 0.4215686274509804,
"acc_norm_stderr": 0.04913595201274498
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.79,
"acc_stderr": 0.04093601807403326,
"acc_norm": 0.79,
"acc_norm_stderr": 0.04093601807403326
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5531914893617021,
"acc_stderr": 0.0325005368436584,
"acc_norm": 0.5531914893617021,
"acc_norm_stderr": 0.0325005368436584
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4824561403508772,
"acc_stderr": 0.04700708033551038,
"acc_norm": 0.4824561403508772,
"acc_norm_stderr": 0.04700708033551038
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5793103448275863,
"acc_stderr": 0.0411391498118926,
"acc_norm": 0.5793103448275863,
"acc_norm_stderr": 0.0411391498118926
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3994708994708995,
"acc_stderr": 0.025225450284067884,
"acc_norm": 0.3994708994708995,
"acc_norm_stderr": 0.025225450284067884
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.0442626668137991,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.0442626668137991
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7580645161290323,
"acc_stderr": 0.024362599693031096,
"acc_norm": 0.7580645161290323,
"acc_norm_stderr": 0.024362599693031096
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5320197044334976,
"acc_stderr": 0.035107665979592154,
"acc_norm": 0.5320197044334976,
"acc_norm_stderr": 0.035107665979592154
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7636363636363637,
"acc_stderr": 0.03317505930009181,
"acc_norm": 0.7636363636363637,
"acc_norm_stderr": 0.03317505930009181
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8080808080808081,
"acc_stderr": 0.028057791672989017,
"acc_norm": 0.8080808080808081,
"acc_norm_stderr": 0.028057791672989017
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8911917098445595,
"acc_stderr": 0.022473253332768763,
"acc_norm": 0.8911917098445595,
"acc_norm_stderr": 0.022473253332768763
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6564102564102564,
"acc_stderr": 0.024078696580635477,
"acc_norm": 0.6564102564102564,
"acc_norm_stderr": 0.024078696580635477
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34074074074074073,
"acc_stderr": 0.02889774874113114,
"acc_norm": 0.34074074074074073,
"acc_norm_stderr": 0.02889774874113114
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6638655462184874,
"acc_stderr": 0.030684737115135353,
"acc_norm": 0.6638655462184874,
"acc_norm_stderr": 0.030684737115135353
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3443708609271523,
"acc_stderr": 0.038796870240733264,
"acc_norm": 0.3443708609271523,
"acc_norm_stderr": 0.038796870240733264
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8293577981651377,
"acc_stderr": 0.01612927102509986,
"acc_norm": 0.8293577981651377,
"acc_norm_stderr": 0.01612927102509986
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.49074074074074076,
"acc_stderr": 0.034093869469927006,
"acc_norm": 0.49074074074074076,
"acc_norm_stderr": 0.034093869469927006
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8235294117647058,
"acc_stderr": 0.026756401538078962,
"acc_norm": 0.8235294117647058,
"acc_norm_stderr": 0.026756401538078962
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8185654008438819,
"acc_stderr": 0.02508596114457966,
"acc_norm": 0.8185654008438819,
"acc_norm_stderr": 0.02508596114457966
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6771300448430493,
"acc_stderr": 0.03138147637575499,
"acc_norm": 0.6771300448430493,
"acc_norm_stderr": 0.03138147637575499
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7709923664122137,
"acc_stderr": 0.036853466317118506,
"acc_norm": 0.7709923664122137,
"acc_norm_stderr": 0.036853466317118506
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7933884297520661,
"acc_stderr": 0.03695980128098823,
"acc_norm": 0.7933884297520661,
"acc_norm_stderr": 0.03695980128098823
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.040191074725573483,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.040191074725573483
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7791411042944786,
"acc_stderr": 0.03259177392742178,
"acc_norm": 0.7791411042944786,
"acc_norm_stderr": 0.03259177392742178
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5089285714285714,
"acc_stderr": 0.04745033255489123,
"acc_norm": 0.5089285714285714,
"acc_norm_stderr": 0.04745033255489123
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8760683760683761,
"acc_stderr": 0.021586494001281365,
"acc_norm": 0.8760683760683761,
"acc_norm_stderr": 0.021586494001281365
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8263090676883781,
"acc_stderr": 0.013547415658662257,
"acc_norm": 0.8263090676883781,
"acc_norm_stderr": 0.013547415658662257
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7196531791907514,
"acc_stderr": 0.024182427496577615,
"acc_norm": 0.7196531791907514,
"acc_norm_stderr": 0.024182427496577615
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3541899441340782,
"acc_stderr": 0.015995644947299232,
"acc_norm": 0.3541899441340782,
"acc_norm_stderr": 0.015995644947299232
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7287581699346405,
"acc_stderr": 0.025457756696667878,
"acc_norm": 0.7287581699346405,
"acc_norm_stderr": 0.025457756696667878
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.707395498392283,
"acc_stderr": 0.02583989833487798,
"acc_norm": 0.707395498392283,
"acc_norm_stderr": 0.02583989833487798
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7345679012345679,
"acc_stderr": 0.024569223600460845,
"acc_norm": 0.7345679012345679,
"acc_norm_stderr": 0.024569223600460845
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4929078014184397,
"acc_stderr": 0.02982449855912901,
"acc_norm": 0.4929078014184397,
"acc_norm_stderr": 0.02982449855912901
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.470013037809648,
"acc_stderr": 0.01274724896707906,
"acc_norm": 0.470013037809648,
"acc_norm_stderr": 0.01274724896707906
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.02841820861940676,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.02841820861940676
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6650326797385621,
"acc_stderr": 0.01909422816700032,
"acc_norm": 0.6650326797385621,
"acc_norm_stderr": 0.01909422816700032
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7,
"acc_stderr": 0.04389311454644287,
"acc_norm": 0.7,
"acc_norm_stderr": 0.04389311454644287
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7428571428571429,
"acc_stderr": 0.02797982353874455,
"acc_norm": 0.7428571428571429,
"acc_norm_stderr": 0.02797982353874455
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8507462686567164,
"acc_stderr": 0.025196929874827072,
"acc_norm": 0.8507462686567164,
"acc_norm_stderr": 0.025196929874827072
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.87,
"acc_stderr": 0.033799766898963086,
"acc_norm": 0.87,
"acc_norm_stderr": 0.033799766898963086
},
"harness|hendrycksTest-virology|5": {
"acc": 0.536144578313253,
"acc_stderr": 0.038823108508905954,
"acc_norm": 0.536144578313253,
"acc_norm_stderr": 0.038823108508905954
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8421052631578947,
"acc_stderr": 0.02796678585916089,
"acc_norm": 0.8421052631578947,
"acc_norm_stderr": 0.02796678585916089
},
"harness|truthfulqa:mc|0": {
"mc1": 0.41982864137086906,
"mc1_stderr": 0.01727703030177577,
"mc2": 0.5910902028361578,
"mc2_stderr": 0.015341341318883641
},
"harness|winogrande|5": {
"acc": 0.7813733228097869,
"acc_stderr": 0.011616198215773225
},
"harness|gsm8k|5": {
"acc": 0.5155420773313116,
"acc_stderr": 0.013765829454512886
}
}
```
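As a quick sanity check on the per-task numbers above, the MMLU-style `harness|hendrycksTest-*` entries can be macro-averaged directly from the results JSON. A minimal sketch, where the `results_json` string is a stand-in for the downloaded file and reproduces only three of the subtasks for brevity:

```python
import json

# Stand-in for the results JSON shown above; only three of the
# "hendrycksTest" subtasks (plus one non-MMLU entry) are included.
results_json = """
{
  "harness|arc:challenge|25": {"acc": 0.6313993174061433},
  "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.38},
  "harness|hendrycksTest-anatomy|5": {"acc": 0.6},
  "harness|hendrycksTest-astronomy|5": {"acc": 0.6710526315789473}
}
"""

results = json.loads(results_json)
# Keep only the MMLU subtasks and macro-average their accuracies.
mmlu_accs = [v["acc"] for k, v in results.items()
             if k.startswith("harness|hendrycksTest-")]
mmlu_avg = sum(mmlu_accs) / len(mmlu_accs)
print(f"MMLU macro-average over {len(mmlu_accs)} subtasks: {mmlu_avg:.4f}")
# → MMLU macro-average over 3 subtasks: 0.5504
```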
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
open-llm-leaderboard/details_Joseph717171__Cerebrum-1.0-10.7B | ---
pretty_name: Evaluation run of Joseph717171/Cerebrum-1.0-10.7B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Joseph717171/Cerebrum-1.0-10.7B](https://huggingface.co/Joseph717171/Cerebrum-1.0-10.7B)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Joseph717171__Cerebrum-1.0-10.7B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-30T16:45:39.060947](https://huggingface.co/datasets/open-llm-leaderboard/details_Joseph717171__Cerebrum-1.0-10.7B/blob/main/results_2024-03-30T16-45-39.060947.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6328279800412597,\n\
\ \"acc_stderr\": 0.03243302506225573,\n \"acc_norm\": 0.6411287113495829,\n\
\ \"acc_norm_stderr\": 0.03311203366620538,\n \"mc1\": 0.29865361077111385,\n\
\ \"mc1_stderr\": 0.016021570613768542,\n \"mc2\": 0.46197867351455224,\n\
\ \"mc2_stderr\": 0.01481314973761461\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5648464163822525,\n \"acc_stderr\": 0.014487986197186045,\n\
\ \"acc_norm\": 0.6092150170648464,\n \"acc_norm_stderr\": 0.014258563880513785\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6302529376618203,\n\
\ \"acc_stderr\": 0.004817495546789554,\n \"acc_norm\": 0.8292172873929496,\n\
\ \"acc_norm_stderr\": 0.003755498941781852\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252605,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252605\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6296296296296297,\n\
\ \"acc_stderr\": 0.041716541613545426,\n \"acc_norm\": 0.6296296296296297,\n\
\ \"acc_norm_stderr\": 0.041716541613545426\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6710526315789473,\n \"acc_stderr\": 0.038234289699266046,\n\
\ \"acc_norm\": 0.6710526315789473,\n \"acc_norm_stderr\": 0.038234289699266046\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.62,\n\
\ \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.62,\n \
\ \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6981132075471698,\n \"acc_stderr\": 0.02825420034443866,\n\
\ \"acc_norm\": 0.6981132075471698,\n \"acc_norm_stderr\": 0.02825420034443866\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6944444444444444,\n\
\ \"acc_stderr\": 0.03852084696008534,\n \"acc_norm\": 0.6944444444444444,\n\
\ \"acc_norm_stderr\": 0.03852084696008534\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956912,\n \
\ \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956912\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.53,\n\
\ \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6647398843930635,\n\
\ \"acc_stderr\": 0.03599586301247078,\n \"acc_norm\": 0.6647398843930635,\n\
\ \"acc_norm_stderr\": 0.03599586301247078\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.43137254901960786,\n \"acc_stderr\": 0.04928099597287534,\n\
\ \"acc_norm\": 0.43137254901960786,\n \"acc_norm_stderr\": 0.04928099597287534\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n\
\ \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.548936170212766,\n \"acc_stderr\": 0.032529096196131965,\n\
\ \"acc_norm\": 0.548936170212766,\n \"acc_norm_stderr\": 0.032529096196131965\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5,\n\
\ \"acc_stderr\": 0.047036043419179864,\n \"acc_norm\": 0.5,\n \
\ \"acc_norm_stderr\": 0.047036043419179864\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5862068965517241,\n \"acc_stderr\": 0.04104269211806232,\n\
\ \"acc_norm\": 0.5862068965517241,\n \"acc_norm_stderr\": 0.04104269211806232\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4126984126984127,\n \"acc_stderr\": 0.025355741263055266,\n \"\
acc_norm\": 0.4126984126984127,\n \"acc_norm_stderr\": 0.025355741263055266\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.36507936507936506,\n\
\ \"acc_stderr\": 0.04306241259127153,\n \"acc_norm\": 0.36507936507936506,\n\
\ \"acc_norm_stderr\": 0.04306241259127153\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.4,\n \"acc_stderr\": 0.04923659639173309,\n \
\ \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n\
\ \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7645161290322581,\n\
\ \"acc_stderr\": 0.02413763242933771,\n \"acc_norm\": 0.7645161290322581,\n\
\ \"acc_norm_stderr\": 0.02413763242933771\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5467980295566502,\n \"acc_stderr\": 0.03502544650845872,\n\
\ \"acc_norm\": 0.5467980295566502,\n \"acc_norm_stderr\": 0.03502544650845872\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\"\
: 0.66,\n \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7696969696969697,\n \"acc_stderr\": 0.03287666758603491,\n\
\ \"acc_norm\": 0.7696969696969697,\n \"acc_norm_stderr\": 0.03287666758603491\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7878787878787878,\n \"acc_stderr\": 0.029126522834586818,\n \"\
acc_norm\": 0.7878787878787878,\n \"acc_norm_stderr\": 0.029126522834586818\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8652849740932642,\n \"acc_stderr\": 0.02463978909770944,\n\
\ \"acc_norm\": 0.8652849740932642,\n \"acc_norm_stderr\": 0.02463978909770944\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6871794871794872,\n \"acc_stderr\": 0.023507579020645365,\n\
\ \"acc_norm\": 0.6871794871794872,\n \"acc_norm_stderr\": 0.023507579020645365\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3333333333333333,\n \"acc_stderr\": 0.028742040903948492,\n \
\ \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.028742040903948492\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6428571428571429,\n \"acc_stderr\": 0.031124619309328177,\n\
\ \"acc_norm\": 0.6428571428571429,\n \"acc_norm_stderr\": 0.031124619309328177\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3576158940397351,\n \"acc_stderr\": 0.03913453431177258,\n \"\
acc_norm\": 0.3576158940397351,\n \"acc_norm_stderr\": 0.03913453431177258\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8293577981651377,\n \"acc_stderr\": 0.01612927102509987,\n \"\
acc_norm\": 0.8293577981651377,\n \"acc_norm_stderr\": 0.01612927102509987\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5555555555555556,\n \"acc_stderr\": 0.03388857118502325,\n \"\
acc_norm\": 0.5555555555555556,\n \"acc_norm_stderr\": 0.03388857118502325\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7941176470588235,\n \"acc_stderr\": 0.028379449451588674,\n \"\
acc_norm\": 0.7941176470588235,\n \"acc_norm_stderr\": 0.028379449451588674\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7721518987341772,\n \"acc_stderr\": 0.02730348459906943,\n \
\ \"acc_norm\": 0.7721518987341772,\n \"acc_norm_stderr\": 0.02730348459906943\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.695067264573991,\n\
\ \"acc_stderr\": 0.030898610882477515,\n \"acc_norm\": 0.695067264573991,\n\
\ \"acc_norm_stderr\": 0.030898610882477515\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7709923664122137,\n \"acc_stderr\": 0.036853466317118506,\n\
\ \"acc_norm\": 0.7709923664122137,\n \"acc_norm_stderr\": 0.036853466317118506\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7520661157024794,\n \"acc_stderr\": 0.039418975265163025,\n \"\
acc_norm\": 0.7520661157024794,\n \"acc_norm_stderr\": 0.039418975265163025\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7777777777777778,\n\
\ \"acc_stderr\": 0.040191074725573483,\n \"acc_norm\": 0.7777777777777778,\n\
\ \"acc_norm_stderr\": 0.040191074725573483\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7975460122699386,\n \"acc_stderr\": 0.031570650789119005,\n\
\ \"acc_norm\": 0.7975460122699386,\n \"acc_norm_stderr\": 0.031570650789119005\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4642857142857143,\n\
\ \"acc_stderr\": 0.04733667890053756,\n \"acc_norm\": 0.4642857142857143,\n\
\ \"acc_norm_stderr\": 0.04733667890053756\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7864077669902912,\n \"acc_stderr\": 0.040580420156460344,\n\
\ \"acc_norm\": 0.7864077669902912,\n \"acc_norm_stderr\": 0.040580420156460344\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8589743589743589,\n\
\ \"acc_stderr\": 0.022801382534597528,\n \"acc_norm\": 0.8589743589743589,\n\
\ \"acc_norm_stderr\": 0.022801382534597528\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768078,\n \
\ \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768078\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8199233716475096,\n\
\ \"acc_stderr\": 0.013740797258579827,\n \"acc_norm\": 0.8199233716475096,\n\
\ \"acc_norm_stderr\": 0.013740797258579827\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7138728323699421,\n \"acc_stderr\": 0.02433214677913413,\n\
\ \"acc_norm\": 0.7138728323699421,\n \"acc_norm_stderr\": 0.02433214677913413\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2860335195530726,\n\
\ \"acc_stderr\": 0.015113972129062127,\n \"acc_norm\": 0.2860335195530726,\n\
\ \"acc_norm_stderr\": 0.015113972129062127\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7320261437908496,\n \"acc_stderr\": 0.025360603796242557,\n\
\ \"acc_norm\": 0.7320261437908496,\n \"acc_norm_stderr\": 0.025360603796242557\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6752411575562701,\n\
\ \"acc_stderr\": 0.026596782287697043,\n \"acc_norm\": 0.6752411575562701,\n\
\ \"acc_norm_stderr\": 0.026596782287697043\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7314814814814815,\n \"acc_stderr\": 0.024659685185967294,\n\
\ \"acc_norm\": 0.7314814814814815,\n \"acc_norm_stderr\": 0.024659685185967294\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4787234042553192,\n \"acc_stderr\": 0.029800481645628693,\n \
\ \"acc_norm\": 0.4787234042553192,\n \"acc_norm_stderr\": 0.029800481645628693\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.44589308996088656,\n\
\ \"acc_stderr\": 0.012695244711379774,\n \"acc_norm\": 0.44589308996088656,\n\
\ \"acc_norm_stderr\": 0.012695244711379774\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6727941176470589,\n \"acc_stderr\": 0.02850145286039656,\n\
\ \"acc_norm\": 0.6727941176470589,\n \"acc_norm_stderr\": 0.02850145286039656\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6764705882352942,\n \"acc_stderr\": 0.018926082916083383,\n \
\ \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.018926082916083383\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n\
\ \"acc_stderr\": 0.04461272175910509,\n \"acc_norm\": 0.6818181818181818,\n\
\ \"acc_norm_stderr\": 0.04461272175910509\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7387755102040816,\n \"acc_stderr\": 0.02812342933514278,\n\
\ \"acc_norm\": 0.7387755102040816,\n \"acc_norm_stderr\": 0.02812342933514278\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8407960199004975,\n\
\ \"acc_stderr\": 0.025870646766169136,\n \"acc_norm\": 0.8407960199004975,\n\
\ \"acc_norm_stderr\": 0.025870646766169136\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774709,\n \
\ \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.03684529491774709\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5421686746987951,\n\
\ \"acc_stderr\": 0.0387862677100236,\n \"acc_norm\": 0.5421686746987951,\n\
\ \"acc_norm_stderr\": 0.0387862677100236\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.02878210810540171,\n\
\ \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.02878210810540171\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.29865361077111385,\n\
\ \"mc1_stderr\": 0.016021570613768542,\n \"mc2\": 0.46197867351455224,\n\
\ \"mc2_stderr\": 0.01481314973761461\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.77663772691397,\n \"acc_stderr\": 0.011705697565205198\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.24260803639120546,\n \
\ \"acc_stderr\": 0.011807426004596855\n }\n}\n```"
repo_url: https://huggingface.co/Joseph717171/Cerebrum-1.0-10.7B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|arc:challenge|25_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|gsm8k|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hellaswag|10_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-30T16-45-39.060947.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-30T16-45-39.060947.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- '**/details_harness|winogrande|5_2024-03-30T16-45-39.060947.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-30T16-45-39.060947.parquet'
- config_name: results
data_files:
- split: 2024_03_30T16_45_39.060947
path:
- results_2024-03-30T16-45-39.060947.parquet
- split: latest
path:
- results_2024-03-30T16-45-39.060947.parquet
---
# Dataset Card for Evaluation run of Joseph717171/Cerebrum-1.0-10.7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Joseph717171/Cerebrum-1.0-10.7B](https://huggingface.co/Joseph717171/Cerebrum-1.0-10.7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
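As a sketch of that split-naming convention (inferred from the split names visible in this card's metadata, not an official API), the run timestamp is turned into a split name by replacing the dashes and colons in the date and time parts with underscores:

```python
# Hypothetical helper illustrating the split-naming convention:
# "2024-03-30T16:45:39.060947" -> "2024_03_30T16_45_39.060947"
def split_name_from_timestamp(ts: str) -> str:
    date_part, time_part = ts.split("T")
    # dashes in the date and colons in the time both become underscores
    return date_part.replace("-", "_") + "T" + time_part.replace(":", "_")

print(split_name_from_timestamp("2024-03-30T16:45:39.060947"))
# → 2024_03_30T16_45_39.060947
```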
An additional configuration, "results", stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Joseph717171__Cerebrum-1.0-10.7B",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-03-30T16:45:39.060947](https://huggingface.co/datasets/open-llm-leaderboard/details_Joseph717171__Cerebrum-1.0-10.7B/blob/main/results_2024-03-30T16-45-39.060947.json) (note that there might be results for other tasks in the repository if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```json
{
"all": {
"acc": 0.6328279800412597,
"acc_stderr": 0.03243302506225573,
"acc_norm": 0.6411287113495829,
"acc_norm_stderr": 0.03311203366620538,
"mc1": 0.29865361077111385,
"mc1_stderr": 0.016021570613768542,
"mc2": 0.46197867351455224,
"mc2_stderr": 0.01481314973761461
},
"harness|arc:challenge|25": {
"acc": 0.5648464163822525,
"acc_stderr": 0.014487986197186045,
"acc_norm": 0.6092150170648464,
"acc_norm_stderr": 0.014258563880513785
},
"harness|hellaswag|10": {
"acc": 0.6302529376618203,
"acc_stderr": 0.004817495546789554,
"acc_norm": 0.8292172873929496,
"acc_norm_stderr": 0.003755498941781852
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252605,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252605
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6296296296296297,
"acc_stderr": 0.041716541613545426,
"acc_norm": 0.6296296296296297,
"acc_norm_stderr": 0.041716541613545426
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6710526315789473,
"acc_stderr": 0.038234289699266046,
"acc_norm": 0.6710526315789473,
"acc_norm_stderr": 0.038234289699266046
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.62,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.62,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6981132075471698,
"acc_stderr": 0.02825420034443866,
"acc_norm": 0.6981132075471698,
"acc_norm_stderr": 0.02825420034443866
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6944444444444444,
"acc_stderr": 0.03852084696008534,
"acc_norm": 0.6944444444444444,
"acc_norm_stderr": 0.03852084696008534
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252604,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252604
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6647398843930635,
"acc_stderr": 0.03599586301247078,
"acc_norm": 0.6647398843930635,
"acc_norm_stderr": 0.03599586301247078
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.43137254901960786,
"acc_stderr": 0.04928099597287534,
"acc_norm": 0.43137254901960786,
"acc_norm_stderr": 0.04928099597287534
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.548936170212766,
"acc_stderr": 0.032529096196131965,
"acc_norm": 0.548936170212766,
"acc_norm_stderr": 0.032529096196131965
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5,
"acc_stderr": 0.047036043419179864,
"acc_norm": 0.5,
"acc_norm_stderr": 0.047036043419179864
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5862068965517241,
"acc_stderr": 0.04104269211806232,
"acc_norm": 0.5862068965517241,
"acc_norm_stderr": 0.04104269211806232
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4126984126984127,
"acc_stderr": 0.025355741263055266,
"acc_norm": 0.4126984126984127,
"acc_norm_stderr": 0.025355741263055266
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.36507936507936506,
"acc_stderr": 0.04306241259127153,
"acc_norm": 0.36507936507936506,
"acc_norm_stderr": 0.04306241259127153
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.4,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.4,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7645161290322581,
"acc_stderr": 0.02413763242933771,
"acc_norm": 0.7645161290322581,
"acc_norm_stderr": 0.02413763242933771
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5467980295566502,
"acc_stderr": 0.03502544650845872,
"acc_norm": 0.5467980295566502,
"acc_norm_stderr": 0.03502544650845872
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695237,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695237
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7696969696969697,
"acc_stderr": 0.03287666758603491,
"acc_norm": 0.7696969696969697,
"acc_norm_stderr": 0.03287666758603491
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7878787878787878,
"acc_stderr": 0.029126522834586818,
"acc_norm": 0.7878787878787878,
"acc_norm_stderr": 0.029126522834586818
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8652849740932642,
"acc_stderr": 0.02463978909770944,
"acc_norm": 0.8652849740932642,
"acc_norm_stderr": 0.02463978909770944
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6871794871794872,
"acc_stderr": 0.023507579020645365,
"acc_norm": 0.6871794871794872,
"acc_norm_stderr": 0.023507579020645365
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3333333333333333,
"acc_stderr": 0.028742040903948492,
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.028742040903948492
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6428571428571429,
"acc_stderr": 0.031124619309328177,
"acc_norm": 0.6428571428571429,
"acc_norm_stderr": 0.031124619309328177
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3576158940397351,
"acc_stderr": 0.03913453431177258,
"acc_norm": 0.3576158940397351,
"acc_norm_stderr": 0.03913453431177258
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8293577981651377,
"acc_stderr": 0.01612927102509987,
"acc_norm": 0.8293577981651377,
"acc_norm_stderr": 0.01612927102509987
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5555555555555556,
"acc_stderr": 0.03388857118502325,
"acc_norm": 0.5555555555555556,
"acc_norm_stderr": 0.03388857118502325
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7941176470588235,
"acc_stderr": 0.028379449451588674,
"acc_norm": 0.7941176470588235,
"acc_norm_stderr": 0.028379449451588674
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7721518987341772,
"acc_stderr": 0.02730348459906943,
"acc_norm": 0.7721518987341772,
"acc_norm_stderr": 0.02730348459906943
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.695067264573991,
"acc_stderr": 0.030898610882477515,
"acc_norm": 0.695067264573991,
"acc_norm_stderr": 0.030898610882477515
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7709923664122137,
"acc_stderr": 0.036853466317118506,
"acc_norm": 0.7709923664122137,
"acc_norm_stderr": 0.036853466317118506
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7520661157024794,
"acc_stderr": 0.039418975265163025,
"acc_norm": 0.7520661157024794,
"acc_norm_stderr": 0.039418975265163025
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.040191074725573483,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.040191074725573483
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7975460122699386,
"acc_stderr": 0.031570650789119005,
"acc_norm": 0.7975460122699386,
"acc_norm_stderr": 0.031570650789119005
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4642857142857143,
"acc_stderr": 0.04733667890053756,
"acc_norm": 0.4642857142857143,
"acc_norm_stderr": 0.04733667890053756
},
"harness|hendrycksTest-management|5": {
"acc": 0.7864077669902912,
"acc_stderr": 0.040580420156460344,
"acc_norm": 0.7864077669902912,
"acc_norm_stderr": 0.040580420156460344
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8589743589743589,
"acc_stderr": 0.022801382534597528,
"acc_norm": 0.8589743589743589,
"acc_norm_stderr": 0.022801382534597528
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768078,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768078
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8199233716475096,
"acc_stderr": 0.013740797258579827,
"acc_norm": 0.8199233716475096,
"acc_norm_stderr": 0.013740797258579827
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7138728323699421,
"acc_stderr": 0.02433214677913413,
"acc_norm": 0.7138728323699421,
"acc_norm_stderr": 0.02433214677913413
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2860335195530726,
"acc_stderr": 0.015113972129062127,
"acc_norm": 0.2860335195530726,
"acc_norm_stderr": 0.015113972129062127
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7320261437908496,
"acc_stderr": 0.025360603796242557,
"acc_norm": 0.7320261437908496,
"acc_norm_stderr": 0.025360603796242557
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6752411575562701,
"acc_stderr": 0.026596782287697043,
"acc_norm": 0.6752411575562701,
"acc_norm_stderr": 0.026596782287697043
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7314814814814815,
"acc_stderr": 0.024659685185967294,
"acc_norm": 0.7314814814814815,
"acc_norm_stderr": 0.024659685185967294
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4787234042553192,
"acc_stderr": 0.029800481645628693,
"acc_norm": 0.4787234042553192,
"acc_norm_stderr": 0.029800481645628693
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.44589308996088656,
"acc_stderr": 0.012695244711379774,
"acc_norm": 0.44589308996088656,
"acc_norm_stderr": 0.012695244711379774
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6727941176470589,
"acc_stderr": 0.02850145286039656,
"acc_norm": 0.6727941176470589,
"acc_norm_stderr": 0.02850145286039656
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.018926082916083383,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.018926082916083383
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6818181818181818,
"acc_stderr": 0.04461272175910509,
"acc_norm": 0.6818181818181818,
"acc_norm_stderr": 0.04461272175910509
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7387755102040816,
"acc_stderr": 0.02812342933514278,
"acc_norm": 0.7387755102040816,
"acc_norm_stderr": 0.02812342933514278
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8407960199004975,
"acc_stderr": 0.025870646766169136,
"acc_norm": 0.8407960199004975,
"acc_norm_stderr": 0.025870646766169136
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.84,
"acc_stderr": 0.03684529491774709,
"acc_norm": 0.84,
"acc_norm_stderr": 0.03684529491774709
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5421686746987951,
"acc_stderr": 0.0387862677100236,
"acc_norm": 0.5421686746987951,
"acc_norm_stderr": 0.0387862677100236
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8304093567251462,
"acc_stderr": 0.02878210810540171,
"acc_norm": 0.8304093567251462,
"acc_norm_stderr": 0.02878210810540171
},
"harness|truthfulqa:mc|0": {
"mc1": 0.29865361077111385,
"mc1_stderr": 0.016021570613768542,
"mc2": 0.46197867351455224,
"mc2_stderr": 0.01481314973761461
},
"harness|winogrande|5": {
"acc": 0.77663772691397,
"acc_stderr": 0.011705697565205198
},
"harness|gsm8k|5": {
"acc": 0.24260803639120546,
"acc_stderr": 0.011807426004596855
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
kpriyanshu256/MultiTabQA-multitable_pretraining-train-v2-107500 | ---
dataset_info:
features:
- name: tables
sequence: string
- name: table_names
sequence: string
- name: query
dtype: string
- name: answer
dtype: string
- name: source
dtype: string
- name: target
dtype: string
- name: source_latex
dtype: string
- name: target_latex
dtype: string
- name: source_html
dtype: string
- name: target_html
dtype: string
- name: source_markdown
dtype: string
- name: target_markdown
dtype: string
splits:
- name: train
num_bytes: 13823747109
num_examples: 2500
download_size: 2865381107
dataset_size: 13823747109
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hackathon-pln-es/neutral-es | ---
language:
- es
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
task_categories:
- text2text-generation
- translation
task_ids: []
pretty_name: neutralES
---
# Spanish Gender Neutralization
<p align="center">
<img src="https://upload.wikimedia.org/wikipedia/commons/2/29/Gender_equality_symbol_%28clipart%29.png" width="250"/>
</p>
Spanish is a beautiful language with many ways of referring to people while neutralizing gender, using resources available within the language itself. One would say *Todas las personas asistentes* instead of *Todos los asistentes*, which is a more inclusive way of talking about people. This dataset collects a set of manually annotated examples of gendered-to-neutral Spanish transformations.
The intended use of this dataset is to train a Spanish language model that translates from gendered to neutral language, in order to produce more inclusive sentences.
### Compiled sources
One of the major challenges was to obtain a valuable dataset that would suit the gender-inclusion purpose; therefore, the team opted to dedicate a considerable amount of time to building the dataset from scratch. You can find the results here.
The data used for model training has been manually created from a compilation of sources: a series of guidelines and manuals on the usage of non-sexist language issued by the Spanish Ministry of Health, Social Services and Equality, listed in this linked [document](https://www.inmujeres.gob.es/servRecursos/formacion/GuiasLengNoSexista/docs/Guiaslenguajenosexista_.pdf).
**NOTE: Apart from manually annotated samples, this dataset has been further expanded by applying data augmentation so that a minimum number of training examples is generated.**
* [Guía para un discurso igualitario en la universidad de alicante](https://ieg.ua.es/es/documentos/normativasobreigualdad/guia-para-un-discurso-igualitario-en-la-ua.pdf)
* [Guía UC de Comunicación en Igualdad](<https://web.unican.es/unidades/igualdad/SiteAssets/igualdad/comunicacion-en-igualdad/guia%20comunicacion%20igualdad%20(web).pdf>)
* [Buenas prácticas para el tratamiento del lenguaje en igualdad](https://e-archivo.uc3m.es/handle/10016/22811)
* [Guía del lenguaje no sexista de la Universidad de Castilla-La Mancha](https://unidadigualdad.ugr.es/page/guiialenguajeuniversitarionosexista_universidaddecastillalamancha/!)
* [Guía de Lenguaje Para el Ámbito Educativo](https://www.educacionyfp.gob.es/va/dam/jcr:8ce318fd-c8ff-4ad2-97b4-7318c27d1682/guialenguajeambitoeducativo.pdf)
* [Guía para un uso igualitario y no sexista del lenguaje y dela imagen en la Universidad de Jaén](https://www.ujaen.es/servicios/uigualdad/sites/servicio_uigualdad/files/uploads/Guia_lenguaje_no_sexista.pdf)
* [Guía de uso no sexista del vocabulario español](https://www.um.es/documents/2187255/2187763/guia-leng-no-sexista.pdf/d5b22eb9-b2e4-4f4b-82aa-8a129cdc83e3)
* [Guía para el uso no sexista de la lengua castellana y de imágenes en la UPV/EHU](https://www.ehu.eus/documents/1734204/1884196/Guia_uso_no_sexista_EHU.pdf)
* [Guía de lenguaje no sexista UNED](http://portal.uned.es/pls/portal/docs/PAGE/UNED_MAIN/LAUNIVERSIDAD/VICERRECTORADOS/GERENCIA/OFICINA_IGUALDAD/CONCEPTOS%20BASICOS/GUIA_LENGUAJE.PDF)
* [COMUNICACIÓN AMBIENTAL CON PERSPECTIVA DE GÉNERO](https://cima.cantabria.es/documents/5710649/5729124/COMUNICACI%C3%93N+AMBIENTAL+CON+PERSPECTIVA+DE+G%C3%89NERO.pdf/ccc18730-53e3-35b9-731e-b4c43339254b)
* [Recomendaciones para la utilización de lenguaje no sexista](https://www.csic.es/sites/default/files/guia_para_un_uso_no_sexista_de_la_lengua_adoptada_por_csic2.pdf)
* [Estudio sobre lenguaje y contenido sexista en la Web](https://www.mujeresenred.net/IMG/pdf/Estudio_paginas_web_T-incluye_ok.pdf)
* [Nombra.en.red. En femenino y en masculino](https://www.inmujeres.gob.es/areasTematicas/educacion/publicaciones/serieLenguaje/docs/Nombra_en_red.pdf)
## Team Members
- Fernando Velasco [(fermaat)](https://huggingface.co/fermaat)
- Cibeles Redondo [(CibelesR)](https://huggingface.co/CibelesR)
- Juan Julian Cea [(Juanju)](https://huggingface.co/Juanju)
- Magdalena Kujalowicz [(MacadellaCosta)](https://huggingface.co/MacadellaCosta)
- Javier Blasco [(javiblasco)](https://huggingface.co/javiblasco)
### Enjoy and feel free to collaborate with this dataset 🤗 |
Supersaiyan1729/instruct_dataset_mcq | ---
dataset_info:
features:
- name: input_prompt
dtype: string
- name: input_output_prompt
dtype: string
- name: dataset
dtype: string
splits:
- name: train
num_bytes: 69681099
num_examples: 48302
download_size: 29758222
dataset_size: 69681099
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "instruct_dataset_mcq"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
raygx/Nepali-Extended-Text-Corpus | ---
dataset_info:
features:
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 5912887103.164312
num_examples: 14613470
- name: test
num_bytes: 5919170.835687262
num_examples: 14629
download_size: 2598024483
dataset_size: 5918806274.0
---
# Dataset Card for "Nepali-Extended-Text-Corpus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tanvirsrbd1/expected_dataset_nov1 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: html
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 1568423
num_examples: 3107
download_size: 509819
dataset_size: 1568423
---
# Dataset Card for "expected_dataset_nov1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tanvirsrbd1/nov1_without_position | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: html
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 1568423
num_examples: 3107
download_size: 509819
dataset_size: 1568423
---
# Dataset Card for "nov1_without_position"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
roleplay4fun/20240327_limarp_segmented_experiment_00 | ---
dataset_info:
features:
- name: segments
list:
- name: label
dtype: bool
- name: text
dtype: string
splits:
- name: train
num_bytes: 37204089
num_examples: 2003
download_size: 21419599
dataset_size: 37204089
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
andyyang/stable_diffusion_prompts_2m | ---
license: cc0-1.0
---
# Stable Diffusion Prompts 2M
Because the Diffusion-DB dataset is too big, I extracted the prompts for prompt study.
The files are:
- sd_promts_2m.txt: the main dataset.
- sd_top5000.keywords.tsv: the top 5,000 most frequent keywords or phrases.
|
librarian-bots/card_with_first_commit | ---
dataset_info:
features:
- name: modelId
dtype: string
- name: tags
sequence: string
- name: pipeline_tag
dtype: string
- name: config
struct:
- name: architectures
sequence: string
- name: model_type
dtype: string
- name: task_specific_params
struct:
- name: conversational
struct:
- name: max_length
dtype: float64
- name: summarization
struct:
- name: early_stopping
dtype: bool
- name: length_penalty
dtype: float64
- name: max_length
dtype: float64
- name: min_length
dtype: float64
- name: no_repeat_ngram_size
dtype: float64
- name: num_beams
dtype: float64
- name: prefix
dtype: string
- name: text-generation
struct:
- name: do_sample
dtype: bool
- name: max_length
dtype: float64
- name: translation_en_to_de
struct:
- name: early_stopping
dtype: bool
- name: max_length
dtype: float64
- name: num_beams
dtype: float64
- name: prefix
dtype: string
- name: translation_en_to_fr
struct:
- name: early_stopping
dtype: bool
- name: max_length
dtype: float64
- name: num_beams
dtype: float64
- name: prefix
dtype: string
- name: translation_en_to_ro
struct:
- name: early_stopping
dtype: bool
- name: max_length
dtype: float64
- name: num_beams
dtype: float64
- name: prefix
dtype: string
- name: downloads
dtype: int64
- name: first_commit
dtype: timestamp[ns, tz=UTC]
- name: card
dtype: string
splits:
- name: train
num_bytes: 20198907.41971414
num_examples: 30344
download_size: 25260494
dataset_size: 20198907.41971414
task_categories:
- text-classification
- feature-extraction
- fill-mask
language:
- en
tags:
- model cards
pretty_name: Model card READMEs with first commit information
size_categories:
- 10K<n<100K
---
# Dataset Card for "card_with_first_commit"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bjkim1/well | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 181401
num_examples: 1034
download_size: 92982
dataset_size: 181401
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZaNioxX/DocILE_10_5_ImageClassification_donut | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': credit_note
'1': debit_note
'2': order
'3': proforma
'4': purchase_order
'5': receipt
'6': sales_order
'7': tax_invoice
'8': utility_bill
- name: ground_truth
dtype: string
splits:
- name: test
num_bytes: 4160197623.858
num_examples: 21483
- name: train
num_bytes: 15904298277.0
num_examples: 85939
download_size: 12741489204
dataset_size: 20064495900.858
---
# Dataset Card for "DocILE_10_5_ImageClassification_donut"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Seenka/banners-TN_Todo_Noticias_AR-20230528T130000-20230528T140000 | ---
dataset_info:
features:
- name: image
dtype: image
- name: yolo_seenka_out
list:
- name: class
dtype: int64
- name: confidence
dtype: float64
- name: name
dtype: string
- name: xmax
dtype: float64
- name: xmin
dtype: float64
- name: ymax
dtype: float64
- name: ymin
dtype: float64
- name: yolo_filter_param
dtype: int64
- name: cropped_seenka_image
dtype: image
- name: timestamp
dtype: int64
- name: embeddings
sequence: float32
- name: embeddings_cropped
sequence: float32
- name: ocr_out
list:
- name: bbox
sequence:
sequence: float64
- name: confidence
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 523466229.704
num_examples: 3596
download_size: 528283226
dataset_size: 523466229.704
---
# Dataset Card for "banners-TN_Todo_Noticias_AR-20230528T130000-20230528T140000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ncduy/mt-en-vi | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
- vi
license:
- mit
multilinguality:
- translation
pretty_name: "Machine Translation Paired English-Vietnamese Sentences"
size_categories:
- 1M<n<10M
source_datasets:
- own
- open_subtitles
- tatoeba
- opus_tedtalks
- qed_amara
- opus_wikipedia
task_categories:
- conditional-text-generation
task_ids:
- machine-translation
---
# Dataset Card for Machine Translation Paired English-Vietnamese Sentences
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset contains parallel sentences in English (`en`) and Vietnamese (`vi`).
## Dataset Structure
### Data Instances
An instance example:
```
{
'en': 'And what I think the world needs now is more connections.',
'vi': 'Và tôi nghĩ điều thế giới đang cần bây giờ là nhiều sự kết nối hơn.',
'source': 'TED2020 v1'
}
```
### Data Fields
- `en` (str): English sentence
- `vi` (str): Vietnamese sentence
- `source` (str): source corpus of the sentence pair (e.g. `TED2020 v1`).
### Data Splits
The dataset is split into train, validation and test sets.
|                    | Train | Validation | Test |
|--------------------|------:|-----------:|-----:|
| Number of examples |2884451| 11316| 11225|
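As an illustrative sketch (the instruction prefix below is an assumption for demonstration, not something prescribed by the dataset), one parallel pair can be turned into a seq2seq training example like this:

```python
def to_seq2seq_example(pair):
    """Format one parallel sentence pair into (input, target) strings
    for a translation model. The instruction prefix is an arbitrary choice."""
    return {
        "input": "translate English to Vietnamese: " + pair["en"],
        "target": pair["vi"],
    }

# Record taken from the data instance shown above
pair = {
    "en": "And what I think the world needs now is more connections.",
    "vi": "Và tôi nghĩ điều thế giới đang cần bây giờ là nhiều sự kết nối hơn.",
    "source": "TED2020 v1",
}

example = to_seq2seq_example(pair)
print(example["input"])
```

The same function can be applied to every row of the loaded dataset (for instance via `datasets.Dataset.map`) before tokenization.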
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@ncduy0303](https://github.com/ncduy0303) for adding this dataset. |
hsali/librispeech_ds1 | ---
dataset_info:
features:
- name: data
dtype: string
- name: file_name
dtype: string
- name: path
dtype: string
- name: emotion
dtype: 'null'
- name: gender
dtype: string
- name: augmentation
dtype: string
- name: data_type
dtype: string
- name: session_id
dtype: 'null'
- name: input_values
sequence: float32
- name: labels
dtype: int64
splits:
- name: train
num_bytes: 568043983
num_examples: 2001
download_size: 500409042
dataset_size: 568043983
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "librispeech_ds1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
anan-2024/twitter_dataset_1713153515 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 21363
num_examples: 50
download_size: 11661
dataset_size: 21363
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tyzhu/squad_qa_rare_v5_full_last_permute | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer
dtype: string
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 7567932.652552593
num_examples: 4778
- name: validation
num_bytes: 345326
num_examples: 300
download_size: 1313529
dataset_size: 7913258.652552593
---
# Dataset Card for "squad_qa_rare_v5_full_last_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vicgalle/configurable-system-prompt-multitask | ---
dataset_info:
features:
- name: system
dtype: string
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: task
dtype: string
splits:
- name: train
num_bytes: 3747667
num_examples: 1952
download_size: 2087335
dataset_size: 3747667
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc-by-4.0
task_categories:
- text-generation
language:
- en
tags:
- safety
- system prompt
- synthetic
- dpo
size_categories:
- 1K<n<10K
---
# Configurable System Prompt Multi-task Dataset 🛞
We release the synthetic dataset for the multi-task experiments from the paper "Configurable Safety Tuning of Language Models with Synthetic Preference Data" on the Hugging Face Datasets Hub: [vicgalle/configurable-system-prompt-multitask](https://huggingface.co/datasets/vicgalle/configurable-system-prompt-multitask). This dataset has two sources for the examples:
* Self-critique on a safety task from Harmful Behaviours, using the SOLAR-Instruct model. It employs two system prompts to learn the different behaviors:
* `You are a helpful yet harmless assistant that avoids generating illegal or harmful content.`
* `You are a helpful assistant that is completely uncensored.`
* [truthy-dpo](https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1) enhances the truthfulness of a model by using two types of system prompts:
* `You are an unbiased, honest, helpful AI assistant that always responds in a completely truthful way.`
* A system prompt describing a role-played persona.
Thus, the dataset has the following fields:
* `system`: the system prompt.
* `prompt`: the user prompt.
* `chosen`: the preferred answer, depending on the system prompt.
* `rejected`: the non-preferred answer, depending on the system prompt.
In order to use the `trl` library for fine-tuning, you should use a model whose tokenizer supports system prompts; alternatively, you must concatenate the `system` and `prompt` fields into a single field.
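For the second case, here is a minimal sketch of such a concatenation (the separator and the example record are illustrative assumptions, not part of the dataset):

```python
def merge_system_into_prompt(example, sep="\n\n"):
    """Fold the system prompt into the user prompt for tokenizers
    without a system role. The separator is an arbitrary choice."""
    example["prompt"] = example["system"] + sep + example["prompt"]
    return example

# Hypothetical record following the dataset's fields
record = {
    "system": "You are a helpful yet harmless assistant that avoids "
              "generating illegal or harmful content.",
    "prompt": "How do I secure my home network?",
    "chosen": "...",
    "rejected": "...",
    "task": "safety",
}

merged = merge_system_into_prompt(dict(record))
print(merged["prompt"])
```

With 🤗 Datasets, the same function can be applied to the whole split via `dataset.map(merge_system_into_prompt)` before handing the data to a preference-tuning trainer.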
## Further information
* Github: https://github.com/vicgalle/configurable-safety-tuning
* Paper: https://arxiv.org/abs/2404.00495 |
lissadesu/code_qa_updated | ---
license: mit
dataset_info:
features:
- name: labNo
dtype: float64
- name: taskNo
dtype: float64
- name: questioner
dtype: string
- name: question
dtype: string
- name: code
dtype: string
- name: startLine
dtype: float64
- name: endLine
dtype: float64
- name: questionType
dtype: string
- name: answer
dtype: string
- name: src
dtype: string
- name: code_processed
dtype: string
- name: id
dtype: string
- name: raw_code
dtype: string
- name: raw_comment
dtype: string
- name: comment
dtype: string
- name: q_code
dtype: string
splits:
- name: train
num_bytes: 46842820
num_examples: 35360
download_size: 17749500
dataset_size: 46842820
---
|
open-llm-leaderboard/details_yeontaek__llama-2-13B-ensemble-v4 | ---
pretty_name: Evaluation run of yeontaek/llama-2-13B-ensemble-v4
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [yeontaek/llama-2-13B-ensemble-v4](https://huggingface.co/yeontaek/llama-2-13B-ensemble-v4)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 61 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_yeontaek__llama-2-13B-ensemble-v4\"\
,\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\
\nThese are the [latest results from run 2023-08-28T09:27:03.867556](https://huggingface.co/datasets/open-llm-leaderboard/details_yeontaek__llama-2-13B-ensemble-v4/blob/main/results_2023-08-28T09%3A27%3A03.867556.json):\n\
\n```python\n{\n \"all\": {\n \"acc\": 0.5663992305904989,\n \"\
acc_stderr\": 0.03429173024379658,\n \"acc_norm\": 0.5702504327612581,\n\
\ \"acc_norm_stderr\": 0.03427095428817404,\n \"mc1\": 0.3806609547123623,\n\
\ \"mc1_stderr\": 0.016997627871907926,\n \"mc2\": 0.518155888420307,\n\
\ \"mc2_stderr\": 0.015704569450921007\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6015358361774744,\n \"acc_stderr\": 0.014306946052735565,\n\
\ \"acc_norm\": 0.6296928327645052,\n \"acc_norm_stderr\": 0.01411129875167495\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6247759410476,\n \
\ \"acc_stderr\": 0.004831911860478687,\n \"acc_norm\": 0.8238398725353515,\n\
\ \"acc_norm_stderr\": 0.0038017777798095838\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4666666666666667,\n\
\ \"acc_stderr\": 0.043097329010363554,\n \"acc_norm\": 0.4666666666666667,\n\
\ \"acc_norm_stderr\": 0.043097329010363554\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.5526315789473685,\n \"acc_stderr\": 0.0404633688397825,\n\
\ \"acc_norm\": 0.5526315789473685,\n \"acc_norm_stderr\": 0.0404633688397825\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.59,\n\
\ \"acc_stderr\": 0.049431107042371025,\n \"acc_norm\": 0.59,\n \
\ \"acc_norm_stderr\": 0.049431107042371025\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6113207547169811,\n \"acc_stderr\": 0.030000485448675986,\n\
\ \"acc_norm\": 0.6113207547169811,\n \"acc_norm_stderr\": 0.030000485448675986\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5694444444444444,\n\
\ \"acc_stderr\": 0.04140685639111503,\n \"acc_norm\": 0.5694444444444444,\n\
\ \"acc_norm_stderr\": 0.04140685639111503\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.4,\n \"acc_stderr\": 0.049236596391733084,\n \
\ \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.049236596391733084\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.47,\n\
\ \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n\
\ \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5260115606936416,\n\
\ \"acc_stderr\": 0.038073017265045125,\n \"acc_norm\": 0.5260115606936416,\n\
\ \"acc_norm_stderr\": 0.038073017265045125\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.29411764705882354,\n \"acc_stderr\": 0.04533838195929775,\n\
\ \"acc_norm\": 0.29411764705882354,\n \"acc_norm_stderr\": 0.04533838195929775\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.74,\n \"acc_stderr\": 0.04408440022768079,\n \"acc_norm\": 0.74,\n\
\ \"acc_norm_stderr\": 0.04408440022768079\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.4851063829787234,\n \"acc_stderr\": 0.032671518489247764,\n\
\ \"acc_norm\": 0.4851063829787234,\n \"acc_norm_stderr\": 0.032671518489247764\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2894736842105263,\n\
\ \"acc_stderr\": 0.04266339443159394,\n \"acc_norm\": 0.2894736842105263,\n\
\ \"acc_norm_stderr\": 0.04266339443159394\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5241379310344828,\n \"acc_stderr\": 0.0416180850350153,\n\
\ \"acc_norm\": 0.5241379310344828,\n \"acc_norm_stderr\": 0.0416180850350153\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.30423280423280424,\n \"acc_stderr\": 0.023695415009463087,\n \"\
acc_norm\": 0.30423280423280424,\n \"acc_norm_stderr\": 0.023695415009463087\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3492063492063492,\n\
\ \"acc_stderr\": 0.04263906892795133,\n \"acc_norm\": 0.3492063492063492,\n\
\ \"acc_norm_stderr\": 0.04263906892795133\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.38,\n \"acc_stderr\": 0.048783173121456316,\n \
\ \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.048783173121456316\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.632258064516129,\n \"acc_stderr\": 0.027430866579973467,\n \"\
acc_norm\": 0.632258064516129,\n \"acc_norm_stderr\": 0.027430866579973467\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.4433497536945813,\n \"acc_stderr\": 0.03495334582162934,\n \"\
acc_norm\": 0.4433497536945813,\n \"acc_norm_stderr\": 0.03495334582162934\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.59,\n \"acc_stderr\": 0.04943110704237102,\n \"acc_norm\"\
: 0.59,\n \"acc_norm_stderr\": 0.04943110704237102\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.6909090909090909,\n \"acc_stderr\": 0.036085410115739666,\n\
\ \"acc_norm\": 0.6909090909090909,\n \"acc_norm_stderr\": 0.036085410115739666\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7373737373737373,\n \"acc_stderr\": 0.03135305009533086,\n \"\
acc_norm\": 0.7373737373737373,\n \"acc_norm_stderr\": 0.03135305009533086\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8082901554404145,\n \"acc_stderr\": 0.028408953626245265,\n\
\ \"acc_norm\": 0.8082901554404145,\n \"acc_norm_stderr\": 0.028408953626245265\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.5641025641025641,\n \"acc_stderr\": 0.025141801511177498,\n\
\ \"acc_norm\": 0.5641025641025641,\n \"acc_norm_stderr\": 0.025141801511177498\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.2851851851851852,\n \"acc_stderr\": 0.027528599210340496,\n \
\ \"acc_norm\": 0.2851851851851852,\n \"acc_norm_stderr\": 0.027528599210340496\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6050420168067226,\n \"acc_stderr\": 0.03175367846096626,\n \
\ \"acc_norm\": 0.6050420168067226,\n \"acc_norm_stderr\": 0.03175367846096626\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.304635761589404,\n \"acc_stderr\": 0.03757949922943343,\n \"acc_norm\"\
: 0.304635761589404,\n \"acc_norm_stderr\": 0.03757949922943343\n },\n\
\ \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7559633027522936,\n\
\ \"acc_stderr\": 0.018415286351416402,\n \"acc_norm\": 0.7559633027522936,\n\
\ \"acc_norm_stderr\": 0.018415286351416402\n },\n \"harness|hendrycksTest-high_school_statistics|5\"\
: {\n \"acc\": 0.49074074074074076,\n \"acc_stderr\": 0.03409386946992699,\n\
\ \"acc_norm\": 0.49074074074074076,\n \"acc_norm_stderr\": 0.03409386946992699\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.75,\n \"acc_stderr\": 0.03039153369274154,\n \"acc_norm\": 0.75,\n\
\ \"acc_norm_stderr\": 0.03039153369274154\n },\n \"harness|hendrycksTest-high_school_world_history|5\"\
: {\n \"acc\": 0.7341772151898734,\n \"acc_stderr\": 0.02875679962965834,\n\
\ \"acc_norm\": 0.7341772151898734,\n \"acc_norm_stderr\": 0.02875679962965834\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6681614349775785,\n\
\ \"acc_stderr\": 0.031602951437766785,\n \"acc_norm\": 0.6681614349775785,\n\
\ \"acc_norm_stderr\": 0.031602951437766785\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.648854961832061,\n \"acc_stderr\": 0.04186445163013751,\n\
\ \"acc_norm\": 0.648854961832061,\n \"acc_norm_stderr\": 0.04186445163013751\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.743801652892562,\n \"acc_stderr\": 0.03984979653302872,\n \"acc_norm\"\
: 0.743801652892562,\n \"acc_norm_stderr\": 0.03984979653302872\n },\n\
\ \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7222222222222222,\n\
\ \"acc_stderr\": 0.043300437496507416,\n \"acc_norm\": 0.7222222222222222,\n\
\ \"acc_norm_stderr\": 0.043300437496507416\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.6871165644171779,\n \"acc_stderr\": 0.036429145782924055,\n\
\ \"acc_norm\": 0.6871165644171779,\n \"acc_norm_stderr\": 0.036429145782924055\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.3482142857142857,\n\
\ \"acc_stderr\": 0.04521829902833585,\n \"acc_norm\": 0.3482142857142857,\n\
\ \"acc_norm_stderr\": 0.04521829902833585\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n\
\ \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.7991452991452992,\n\
\ \"acc_stderr\": 0.026246772946890484,\n \"acc_norm\": 0.7991452991452992,\n\
\ \"acc_norm_stderr\": 0.026246772946890484\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.63,\n \"acc_stderr\": 0.048523658709391,\n \
\ \"acc_norm\": 0.63,\n \"acc_norm_stderr\": 0.048523658709391\n },\n\
\ \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7650063856960408,\n\
\ \"acc_stderr\": 0.015162024152278452,\n \"acc_norm\": 0.7650063856960408,\n\
\ \"acc_norm_stderr\": 0.015162024152278452\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6242774566473989,\n \"acc_stderr\": 0.02607431485165708,\n\
\ \"acc_norm\": 0.6242774566473989,\n \"acc_norm_stderr\": 0.02607431485165708\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3664804469273743,\n\
\ \"acc_stderr\": 0.016115235504865467,\n \"acc_norm\": 0.3664804469273743,\n\
\ \"acc_norm_stderr\": 0.016115235504865467\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6045751633986928,\n \"acc_stderr\": 0.02799672318063145,\n\
\ \"acc_norm\": 0.6045751633986928,\n \"acc_norm_stderr\": 0.02799672318063145\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6527331189710611,\n\
\ \"acc_stderr\": 0.027040745502307336,\n \"acc_norm\": 0.6527331189710611,\n\
\ \"acc_norm_stderr\": 0.027040745502307336\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6419753086419753,\n \"acc_stderr\": 0.026675611926037093,\n\
\ \"acc_norm\": 0.6419753086419753,\n \"acc_norm_stderr\": 0.026675611926037093\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.41843971631205673,\n \"acc_stderr\": 0.02942799403941999,\n \
\ \"acc_norm\": 0.41843971631205673,\n \"acc_norm_stderr\": 0.02942799403941999\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.42633637548891784,\n\
\ \"acc_stderr\": 0.012630884771599692,\n \"acc_norm\": 0.42633637548891784,\n\
\ \"acc_norm_stderr\": 0.012630884771599692\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.5330882352941176,\n \"acc_stderr\": 0.03030625772246831,\n\
\ \"acc_norm\": 0.5330882352941176,\n \"acc_norm_stderr\": 0.03030625772246831\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.576797385620915,\n \"acc_stderr\": 0.01998780976948206,\n \
\ \"acc_norm\": 0.576797385620915,\n \"acc_norm_stderr\": 0.01998780976948206\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6090909090909091,\n\
\ \"acc_stderr\": 0.046737523336702384,\n \"acc_norm\": 0.6090909090909091,\n\
\ \"acc_norm_stderr\": 0.046737523336702384\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.6489795918367347,\n \"acc_stderr\": 0.03055531675557364,\n\
\ \"acc_norm\": 0.6489795918367347,\n \"acc_norm_stderr\": 0.03055531675557364\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6268656716417911,\n\
\ \"acc_stderr\": 0.034198326081760065,\n \"acc_norm\": 0.6268656716417911,\n\
\ \"acc_norm_stderr\": 0.034198326081760065\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.85,\n \"acc_stderr\": 0.0358870281282637,\n \
\ \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.0358870281282637\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4457831325301205,\n\
\ \"acc_stderr\": 0.03869543323472101,\n \"acc_norm\": 0.4457831325301205,\n\
\ \"acc_norm_stderr\": 0.03869543323472101\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.7719298245614035,\n \"acc_stderr\": 0.032180937956023566,\n\
\ \"acc_norm\": 0.7719298245614035,\n \"acc_norm_stderr\": 0.032180937956023566\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3806609547123623,\n\
\ \"mc1_stderr\": 0.016997627871907926,\n \"mc2\": 0.518155888420307,\n\
\ \"mc2_stderr\": 0.015704569450921007\n }\n}\n```"
repo_url: https://huggingface.co/yeontaek/llama-2-13B-ensemble-v4
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|arc:challenge|25_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hellaswag|10_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-28T09:27:03.867556.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-28T09:27:03.867556.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-28T09:27:03.867556.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-28T09:27:03.867556.parquet'
- config_name: results
data_files:
- split: 2023_08_28T09_27_03.867556
path:
- results_2023-08-28T09:27:03.867556.parquet
- split: latest
path:
- results_2023-08-28T09:27:03.867556.parquet
---
# Dataset Card for Evaluation run of yeontaek/llama-2-13B-ensemble-v4
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/yeontaek/llama-2-13B-ensemble-v4
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [yeontaek/llama-2-13B-ensemble-v4](https://huggingface.co/yeontaek/llama-2-13B-ensemble-v4) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_yeontaek__llama-2-13B-ensemble-v4",
"harness_truthfulqa_mc_0",
    split="latest")
```
## Latest results
These are the [latest results from run 2023-08-28T09:27:03.867556](https://huggingface.co/datasets/open-llm-leaderboard/details_yeontaek__llama-2-13B-ensemble-v4/blob/main/results_2023-08-28T09%3A27%3A03.867556.json):
```json
{
"all": {
"acc": 0.5663992305904989,
"acc_stderr": 0.03429173024379658,
"acc_norm": 0.5702504327612581,
"acc_norm_stderr": 0.03427095428817404,
"mc1": 0.3806609547123623,
"mc1_stderr": 0.016997627871907926,
"mc2": 0.518155888420307,
"mc2_stderr": 0.015704569450921007
},
"harness|arc:challenge|25": {
"acc": 0.6015358361774744,
"acc_stderr": 0.014306946052735565,
"acc_norm": 0.6296928327645052,
"acc_norm_stderr": 0.01411129875167495
},
"harness|hellaswag|10": {
"acc": 0.6247759410476,
"acc_stderr": 0.004831911860478687,
"acc_norm": 0.8238398725353515,
"acc_norm_stderr": 0.0038017777798095838
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.4666666666666667,
"acc_stderr": 0.043097329010363554,
"acc_norm": 0.4666666666666667,
"acc_norm_stderr": 0.043097329010363554
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5526315789473685,
"acc_stderr": 0.0404633688397825,
"acc_norm": 0.5526315789473685,
"acc_norm_stderr": 0.0404633688397825
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.59,
"acc_stderr": 0.049431107042371025,
"acc_norm": 0.59,
"acc_norm_stderr": 0.049431107042371025
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6113207547169811,
"acc_stderr": 0.030000485448675986,
"acc_norm": 0.6113207547169811,
"acc_norm_stderr": 0.030000485448675986
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.5694444444444444,
"acc_stderr": 0.04140685639111503,
"acc_norm": 0.5694444444444444,
"acc_norm_stderr": 0.04140685639111503
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.4,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.4,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5260115606936416,
"acc_stderr": 0.038073017265045125,
"acc_norm": 0.5260115606936416,
"acc_norm_stderr": 0.038073017265045125
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.29411764705882354,
"acc_stderr": 0.04533838195929775,
"acc_norm": 0.29411764705882354,
"acc_norm_stderr": 0.04533838195929775
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768079,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768079
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.4851063829787234,
"acc_stderr": 0.032671518489247764,
"acc_norm": 0.4851063829787234,
"acc_norm_stderr": 0.032671518489247764
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2894736842105263,
"acc_stderr": 0.04266339443159394,
"acc_norm": 0.2894736842105263,
"acc_norm_stderr": 0.04266339443159394
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5241379310344828,
"acc_stderr": 0.0416180850350153,
"acc_norm": 0.5241379310344828,
"acc_norm_stderr": 0.0416180850350153
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.30423280423280424,
"acc_stderr": 0.023695415009463087,
"acc_norm": 0.30423280423280424,
"acc_norm_stderr": 0.023695415009463087
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3492063492063492,
"acc_stderr": 0.04263906892795133,
"acc_norm": 0.3492063492063492,
"acc_norm_stderr": 0.04263906892795133
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.38,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.38,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.632258064516129,
"acc_stderr": 0.027430866579973467,
"acc_norm": 0.632258064516129,
"acc_norm_stderr": 0.027430866579973467
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4433497536945813,
"acc_stderr": 0.03495334582162934,
"acc_norm": 0.4433497536945813,
"acc_norm_stderr": 0.03495334582162934
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.59,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.59,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.6909090909090909,
"acc_stderr": 0.036085410115739666,
"acc_norm": 0.6909090909090909,
"acc_norm_stderr": 0.036085410115739666
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7373737373737373,
"acc_stderr": 0.03135305009533086,
"acc_norm": 0.7373737373737373,
"acc_norm_stderr": 0.03135305009533086
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8082901554404145,
"acc_stderr": 0.028408953626245265,
"acc_norm": 0.8082901554404145,
"acc_norm_stderr": 0.028408953626245265
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5641025641025641,
"acc_stderr": 0.025141801511177498,
"acc_norm": 0.5641025641025641,
"acc_norm_stderr": 0.025141801511177498
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2851851851851852,
"acc_stderr": 0.027528599210340496,
"acc_norm": 0.2851851851851852,
"acc_norm_stderr": 0.027528599210340496
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6050420168067226,
"acc_stderr": 0.03175367846096626,
"acc_norm": 0.6050420168067226,
"acc_norm_stderr": 0.03175367846096626
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.304635761589404,
"acc_stderr": 0.03757949922943343,
"acc_norm": 0.304635761589404,
"acc_norm_stderr": 0.03757949922943343
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7559633027522936,
"acc_stderr": 0.018415286351416402,
"acc_norm": 0.7559633027522936,
"acc_norm_stderr": 0.018415286351416402
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.49074074074074076,
"acc_stderr": 0.03409386946992699,
"acc_norm": 0.49074074074074076,
"acc_norm_stderr": 0.03409386946992699
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.75,
"acc_stderr": 0.03039153369274154,
"acc_norm": 0.75,
"acc_norm_stderr": 0.03039153369274154
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7341772151898734,
"acc_stderr": 0.02875679962965834,
"acc_norm": 0.7341772151898734,
"acc_norm_stderr": 0.02875679962965834
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6681614349775785,
"acc_stderr": 0.031602951437766785,
"acc_norm": 0.6681614349775785,
"acc_norm_stderr": 0.031602951437766785
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.648854961832061,
"acc_stderr": 0.04186445163013751,
"acc_norm": 0.648854961832061,
"acc_norm_stderr": 0.04186445163013751
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.743801652892562,
"acc_stderr": 0.03984979653302872,
"acc_norm": 0.743801652892562,
"acc_norm_stderr": 0.03984979653302872
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.043300437496507416,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.043300437496507416
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6871165644171779,
"acc_stderr": 0.036429145782924055,
"acc_norm": 0.6871165644171779,
"acc_norm_stderr": 0.036429145782924055
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.3482142857142857,
"acc_stderr": 0.04521829902833585,
"acc_norm": 0.3482142857142857,
"acc_norm_stderr": 0.04521829902833585
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.7991452991452992,
"acc_stderr": 0.026246772946890484,
"acc_norm": 0.7991452991452992,
"acc_norm_stderr": 0.026246772946890484
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.63,
"acc_stderr": 0.048523658709391,
"acc_norm": 0.63,
"acc_norm_stderr": 0.048523658709391
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7650063856960408,
"acc_stderr": 0.015162024152278452,
"acc_norm": 0.7650063856960408,
"acc_norm_stderr": 0.015162024152278452
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6242774566473989,
"acc_stderr": 0.02607431485165708,
"acc_norm": 0.6242774566473989,
"acc_norm_stderr": 0.02607431485165708
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3664804469273743,
"acc_stderr": 0.016115235504865467,
"acc_norm": 0.3664804469273743,
"acc_norm_stderr": 0.016115235504865467
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6045751633986928,
"acc_stderr": 0.02799672318063145,
"acc_norm": 0.6045751633986928,
"acc_norm_stderr": 0.02799672318063145
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6527331189710611,
"acc_stderr": 0.027040745502307336,
"acc_norm": 0.6527331189710611,
"acc_norm_stderr": 0.027040745502307336
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6419753086419753,
"acc_stderr": 0.026675611926037093,
"acc_norm": 0.6419753086419753,
"acc_norm_stderr": 0.026675611926037093
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.41843971631205673,
"acc_stderr": 0.02942799403941999,
"acc_norm": 0.41843971631205673,
"acc_norm_stderr": 0.02942799403941999
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.42633637548891784,
"acc_stderr": 0.012630884771599692,
"acc_norm": 0.42633637548891784,
"acc_norm_stderr": 0.012630884771599692
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5330882352941176,
"acc_stderr": 0.03030625772246831,
"acc_norm": 0.5330882352941176,
"acc_norm_stderr": 0.03030625772246831
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.576797385620915,
"acc_stderr": 0.01998780976948206,
"acc_norm": 0.576797385620915,
"acc_norm_stderr": 0.01998780976948206
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6090909090909091,
"acc_stderr": 0.046737523336702384,
"acc_norm": 0.6090909090909091,
"acc_norm_stderr": 0.046737523336702384
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6489795918367347,
"acc_stderr": 0.03055531675557364,
"acc_norm": 0.6489795918367347,
"acc_norm_stderr": 0.03055531675557364
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.6268656716417911,
"acc_stderr": 0.034198326081760065,
"acc_norm": 0.6268656716417911,
"acc_norm_stderr": 0.034198326081760065
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.0358870281282637,
"acc_norm": 0.85,
"acc_norm_stderr": 0.0358870281282637
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4457831325301205,
"acc_stderr": 0.03869543323472101,
"acc_norm": 0.4457831325301205,
"acc_norm_stderr": 0.03869543323472101
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7719298245614035,
"acc_stderr": 0.032180937956023566,
"acc_norm": 0.7719298245614035,
"acc_norm_stderr": 0.032180937956023566
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3806609547123623,
"mc1_stderr": 0.016997627871907926,
"mc2": 0.518155888420307,
"mc2_stderr": 0.015704569450921007
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
pipi00pipi/gal_bober | ---
license: openrail
---
|
AdapterOcean/python3-standardized_cluster_15 | ---
dataset_info:
features:
- name: text
dtype: string
- name: conversation_id
dtype: int64
- name: embedding
sequence: float64
- name: cluster
dtype: int64
splits:
- name: train
num_bytes: 46676637
num_examples: 4091
download_size: 0
dataset_size: 46676637
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "python3-standardized_cluster_15"
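The feature schema declared above can be sketched as a plain record; the values below are invented for illustration, and real rows should be obtained with `load_dataset("AdapterOcean/python3-standardized_cluster_15")`:

```python
# One illustrative row matching the declared features:
# text (string), conversation_id (int64), embedding (sequence of float64), cluster (int64).
# All values here are made up for illustration.
row = {
    "text": "def add(a, b):\n    return a + b",
    "conversation_id": 42,
    "embedding": [0.12, -0.34, 0.56],
    "cluster": 15,
}

assert isinstance(row["text"], str)
assert isinstance(row["conversation_id"], int)
assert all(isinstance(x, float) for x in row["embedding"])
assert isinstance(row["cluster"], int)
```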
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
RobAgrees/Matthew_Mcconaughey_RVC | ---
license: mit
---
|
aluncstokes/mathpile_arxiv_subset_tiny | ---
configs:
- config_name: default
data_files:
- split: train
path: "train_chunked.jsonl"
- split: test
path: "test_chunked.jsonl"
---
# MathPile ArXiv (subset)
## Description
This dataset is a toy subset of 8,834 TeX files (5,000 training + 3,834 testing) drawn from the arXiv subset of MathPile, used for testing. You should not use this dataset. The training and testing sets are already split.
## Source
The data was obtained from the training + validation portion of the arXiv subset of MathPile.
## Format
- Given as JSONL files of JSON dicts, one per line, each containing the single key: "text"
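To illustrate the format (the two sample lines below are invented, not actual MathPile records; the real files are `train_chunked.jsonl` and `test_chunked.jsonl` from the config above), each line parses independently as a JSON object with a single `"text"` key:

```python
import json

# Illustrative JSONL content: one JSON object per line, single "text" key.
# These two lines are made-up examples, not actual MathPile records.
sample_jsonl = "\n".join([
    '{"text": "\\\\documentclass{article}"}',
    '{"text": "Let $x \\\\in \\\\mathbb{R}$."}',
])

records = [json.loads(line) for line in sample_jsonl.splitlines() if line.strip()]
texts = [r["text"] for r in records]
print(texts[0])  # prints: \documentclass{article}
```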
## Usage
- Raw LaTeX source text, intended only for testing text-processing pipelines (see Description)
## License
The original data is subject to the licensing terms of the arXiv. Users should refer to the arXiv's terms of use for details on permissible usage.
|
CyberHarem/laegjarn_fireemblem | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of laegjarn (Fire Emblem)
This is the dataset of laegjarn (Fire Emblem), containing 244 images and their tags.
The core tags of this character are `green_hair, dark_skin, multicolored_hair, short_hair, breasts, red_eyes, dark-skinned_female, gradient_hair, orange_hair, large_breasts, hair_ornament, horns, earrings, hair_flower`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 244 | 333.27 MiB | [Download](https://huggingface.co/datasets/CyberHarem/laegjarn_fireemblem/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 244 | 179.68 MiB | [Download](https://huggingface.co/datasets/CyberHarem/laegjarn_fireemblem/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 609 | 388.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/laegjarn_fireemblem/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 244 | 291.30 MiB | [Download](https://huggingface.co/datasets/CyberHarem/laegjarn_fireemblem/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 609 | 558.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/laegjarn_fireemblem/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/laegjarn_fireemblem',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, black_one-piece_swimsuit, cleavage, solo, flower, simple_background, upper_body, arms_up, parted_lips |
| 1 | 13 |  |  |  |  |  | 1girl, simple_background, solo, upper_body, cape, jewelry, smile, white_background, breastplate, closed_mouth, feather_trim, looking_at_viewer |
| 2 | 6 |  |  |  |  |  | 1girl, breastplate, cape, holding_sword, solo, feather_trim, jewelry, lipstick, simple_background, full_body, two-tone_hair, armored_boots, bangs, gauntlets, pantyhose |
| 3 | 5 |  |  |  |  |  | 1girl, armored_boots, bangs, feather_trim, fire, flaming_eye, jewelry, long_hair, solo, two-tone_hair, arrow_(projectile), cleavage, full_body, hat, holding_bow_(weapon), lipstick, parted_lips, purple_lips, shiny_clothes, white_background, arm_up, bare_shoulders, elbow_gloves, gold_trim, gradient_clothes, high_heels, looking_at_viewer, pantyhose, purple_bodysuit, simple_background, arm_behind_head, clothing_cutout, feathers, leg_up, looking_away, pelvic_curtain, shoulder_armor, sleeveless, standing, teeth, transparent_background, turtleneck |
| 4 | 8 |  |  |  |  |  | 1girl, solo, alternate_costume, fur_trim, oil-paper_umbrella, holding, obi, wide_sleeves, choker, floral_print, full_body, jewelry, looking_at_viewer, red_kimono, bangs, closed_mouth, flower, sandals, simple_background, smile, tabi, white_background, lipstick |
| 5 | 25 |  |  |  |  |  | 1girl, hetero, solo_focus, 1boy, nipples, penis, flower, jewelry, blush, open_mouth, black_one-piece_swimsuit, cum_on_breasts, facial, sex, huge_breasts, mosaic_censoring, pussy, vaginal |
| 6 | 5 |  |  |  |  |  | 1girl, hetero, multiple_penises, nipples, solo_focus, blush, navel, 2boys, completely_nude, cum_in_pussy, gangbang, mmf_threesome, spread_legs, uncensored, vaginal, 3boys, anal, cum_in_ass, fellatio, lying, open_mouth, simple_background, sweat |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_one-piece_swimsuit | cleavage | solo | flower | simple_background | upper_body | arms_up | parted_lips | cape | jewelry | smile | white_background | breastplate | closed_mouth | feather_trim | looking_at_viewer | holding_sword | lipstick | full_body | two-tone_hair | armored_boots | bangs | gauntlets | pantyhose | fire | flaming_eye | long_hair | arrow_(projectile) | hat | holding_bow_(weapon) | purple_lips | shiny_clothes | arm_up | bare_shoulders | elbow_gloves | gold_trim | gradient_clothes | high_heels | purple_bodysuit | arm_behind_head | clothing_cutout | feathers | leg_up | looking_away | pelvic_curtain | shoulder_armor | sleeveless | standing | teeth | transparent_background | turtleneck | alternate_costume | fur_trim | oil-paper_umbrella | holding | obi | wide_sleeves | choker | floral_print | red_kimono | sandals | tabi | hetero | solo_focus | 1boy | nipples | penis | blush | open_mouth | cum_on_breasts | facial | sex | huge_breasts | mosaic_censoring | pussy | vaginal | multiple_penises | navel | 2boys | completely_nude | cum_in_pussy | gangbang | mmf_threesome | spread_legs | uncensored | 3boys | anal | cum_in_ass | fellatio | lying | sweat |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------------------|:-----------|:-------|:---------|:--------------------|:-------------|:----------|:--------------|:-------|:----------|:--------|:-------------------|:--------------|:---------------|:---------------|:--------------------|:----------------|:-----------|:------------|:----------------|:----------------|:--------|:------------|:------------|:-------|:--------------|:------------|:---------------------|:------|:-----------------------|:--------------|:----------------|:---------|:-----------------|:---------------|:------------|:-------------------|:-------------|:------------------|:------------------|:------------------|:-----------|:---------|:---------------|:-----------------|:-----------------|:-------------|:-----------|:--------|:-------------------------|:-------------|:--------------------|:-----------|:---------------------|:----------|:------|:---------------|:---------|:---------------|:-------------|:----------|:-------|:---------|:-------------|:-------|:----------|:--------|:--------|:-------------|:-----------------|:---------|:------|:---------------|:-------------------|:--------|:----------|:-------------------|:--------|:--------|:------------------|:---------------|:-----------|:----------------|:--------------|:-------------|:--------|:-------|:-------------|:-----------|:--------|:--------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 13 |  |  |  |  |  | X | | | X | | X | X | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | | | X | | X | | | | X | X | | | X | | X | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | | X | X | | X | | | X | | X | | X | | | X | X | | X | X | X | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 8 |  |  |  |  |  | X | | | X | X | X | | | | | X | X | X | | X | | X | | X | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 25 |  |  |  |  |  | X | X | | | X | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | |
| 6 | 5 |  |  |  |  |  | X | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | | X | | X | X | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
ravialdy/javanese-translated | ---
dataset_info:
features:
- name: message_id
dtype: string
- name: parent_id
dtype: string
- name: user_id
dtype: string
- name: created_date
dtype: string
- name: text
dtype: string
- name: role
dtype: string
- name: lang
dtype: string
- name: review_count
dtype: int64
- name: review_result
dtype: bool
- name: deleted
dtype: bool
- name: rank
dtype: float64
- name: synthetic
dtype: bool
- name: model_name
dtype: 'null'
- name: detoxify
struct:
- name: identity_attack
dtype: float64
- name: insult
dtype: float64
- name: obscene
dtype: float64
- name: severe_toxicity
dtype: float64
- name: sexual_explicit
dtype: float64
- name: threat
dtype: float64
- name: toxicity
dtype: float64
- name: message_tree_id
dtype: string
- name: tree_state
dtype: string
- name: emojis
struct:
- name: count
sequence: int64
- name: name
sequence: string
- name: labels
struct:
- name: count
sequence: int64
- name: name
sequence: string
- name: value
sequence: float64
splits:
- name: train
num_bytes: 108091191
num_examples: 100561
download_size: 36662895
dataset_size: 108091191
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
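The `message_id`/`parent_id`/`message_tree_id` fields in the schema above imply that the flat rows encode conversation trees (OASST-style). A minimal sketch of regrouping such rows into per-tree parent→children maps; the sample rows below are hypothetical illustrations of the schema, not actual dataset records:

```python
from collections import defaultdict

# Hypothetical rows mirroring the schema above; real records carry
# many more fields (text, role, lang, detoxify scores, ...).
rows = [
    {"message_id": "a", "parent_id": None, "message_tree_id": "t1"},
    {"message_id": "b", "parent_id": "a",  "message_tree_id": "t1"},
    {"message_id": "c", "parent_id": "a",  "message_tree_id": "t1"},
]

def build_trees(rows):
    """Group rows by message_tree_id, mapping each parent_id to its child message_ids."""
    trees = defaultdict(lambda: defaultdict(list))
    for r in rows:
        trees[r["message_tree_id"]][r["parent_id"]].append(r["message_id"])
    return trees

trees = build_trees(rows)
print(dict(trees["t1"]))  # {None: ['a'], 'a': ['b', 'c']}
```

A root message has `parent_id` of `None`, so `trees[tree_id][None]` recovers the prompt that starts each conversation.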
|
ShreySavaliya/TextSummarisation | ---
tags:
- autotrain
- summarization
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- vishw2703/autotrain-data-unisumm_3
co2_eq_emissions:
emissions: 1368.894142563709
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1228646724
- CO2 Emissions (in grams): 1368.8941
## Validation Metrics
- Loss: 2.319
- Rouge1: 43.703
- Rouge2: 16.106
- RougeL: 23.715
- RougeLsum: 38.984
- Gen Len: 141.091
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/vishw2703/autotrain-unisumm_3-1228646724
``` |
Hyperspace-Technologies/scp-wiki-text | ---
license: cc-by-4.0
language:
- en
tags:
- scp
size_categories:
- 100M<n<1B
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 24497718.02277939
num_examples: 314294
- name: test
num_bytes: 2722003.3115220205
num_examples: 34922
download_size: 72410093
dataset_size: 27219721.334301412
---
|
kevinmgates/youtoksDataset | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 26350
num_examples: 41
download_size: 15154
dataset_size: 26350
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
MikeGreen2710/v1444_train_split | ---
dataset_info:
features:
- name: Word
dtype: string
- name: Tag
dtype: string
- name: 'Sentence #'
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 4566965
num_examples: 137723
download_size: 1536847
dataset_size: 4566965
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
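The token-per-row layout above (`Word`, `Tag`, `'Sentence #'`) is a common NER format that is usually regrouped into sentences before training. A sketch of that regrouping; the rows below are hypothetical placeholders, not actual dataset content:

```python
from itertools import groupby

# Hypothetical token rows; assumes rows are already ordered by 'Sentence #',
# as is typical for this layout.
rows = [
    {"Word": "John",  "Tag": "B-PER", "Sentence #": 0},
    {"Word": "lives", "Tag": "O",     "Sentence #": 0},
    {"Word": "Nice",  "Tag": "B-LOC", "Sentence #": 1},
]

def group_sentences(rows):
    """Collect (Word, Tag) pairs for each 'Sentence #' value."""
    return {
        sid: [(r["Word"], r["Tag"]) for r in grp]
        for sid, grp in groupby(rows, key=lambda r: r["Sentence #"])
    }

sentences = group_sentences(rows)
print(sentences[0])  # [('John', 'B-PER'), ('lives', 'O')]
```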
|
taesiri/TinyStories-Farsi | ---
license: cdla-sharing-1.0
task_categories:
- text-generation
- text2text-generation
language:
- fa
- en
tags:
- Persian
- Farsi
- English2Farsi
- Farsi2English
pretty_name: Tiny Stories - Farsi
size_categories:
- 100K<n<1M
---
# Tiny Stories Farsi
The _Tiny Stories Farsi_ project is an ongoing effort to translate the [Tiny Stories dataset](https://huggingface.co/datasets/roneneldan/TinyStories) into Persian (Farsi). The primary goal is to produce a high-quality Farsi dataset equivalent to the original English version and then use it to train Farsi language models, testing whether the advancements and trends observed in English language models carry over to other languages. So far, the project has translated over 27,000 entries from the validation set (originally created by `GPT-4`) into Farsi, using the `Claude-2.0` language model for the translation. The project remains active and welcomes ongoing contributions and collaboration toward enriching non-English language data in machine learning and artificial intelligence.
Original paper: [TinyStories: How Small Can Language Models Be and Still Speak Coherent English?](https://arxiv.org/abs/2305.07759)
# Acknowledgements
This project is made possible through the generous support of [Anthropic](https://www.anthropic.com/), who provided free access to the `Claude-2.0` API.
|
open-llm-leaderboard/details_jondurbin__airoboros-l2-7b-2.2.1 | ---
pretty_name: Evaluation run of jondurbin/airoboros-l2-7b-2.2.1
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [jondurbin/airoboros-l2-7b-2.2.1](https://huggingface.co/jondurbin/airoboros-l2-7b-2.2.1)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_jondurbin__airoboros-l2-7b-2.2.1\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-23T08:56:47.805064](https://huggingface.co/datasets/open-llm-leaderboard/details_jondurbin__airoboros-l2-7b-2.2.1/blob/main/results_2023-10-23T08-56-47.805064.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.02307046979865772,\n\
\ \"em_stderr\": 0.0015374446489046648,\n \"f1\": 0.08397441275167743,\n\
\ \"f1_stderr\": 0.001986739570455047,\n \"acc\": 0.39968692648816134,\n\
\ \"acc_stderr\": 0.009485985984111937\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.02307046979865772,\n \"em_stderr\": 0.0015374446489046648,\n\
\ \"f1\": 0.08397441275167743,\n \"f1_stderr\": 0.001986739570455047\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.06141015921152388,\n \
\ \"acc_stderr\": 0.006613027536586316\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7379636937647988,\n \"acc_stderr\": 0.012358944431637557\n\
\ }\n}\n```"
repo_url: https://huggingface.co/jondurbin/airoboros-l2-7b-2.2.1
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|arc:challenge|25_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_23T08_56_47.805064
path:
- '**/details_harness|drop|3_2023-10-23T08-56-47.805064.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-23T08-56-47.805064.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_23T08_56_47.805064
path:
- '**/details_harness|gsm8k|5_2023-10-23T08-56-47.805064.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-23T08-56-47.805064.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hellaswag|10_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-01T13-13-15.281257.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-01T13-13-15.281257.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-01T13-13-15.281257.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_23T08_56_47.805064
path:
- '**/details_harness|winogrande|5_2023-10-23T08-56-47.805064.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-23T08-56-47.805064.parquet'
- config_name: results
data_files:
- split: 2023_10_01T13_13_15.281257
path:
- results_2023-10-01T13-13-15.281257.parquet
- split: 2023_10_23T08_56_47.805064
path:
- results_2023-10-23T08-56-47.805064.parquet
- split: latest
path:
- results_2023-10-23T08-56-47.805064.parquet
---
# Dataset Card for Evaluation run of jondurbin/airoboros-l2-7b-2.2.1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/jondurbin/airoboros-l2-7b-2.2.1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [jondurbin/airoboros-l2-7b-2.2.1](https://huggingface.co/jondurbin/airoboros-l2-7b-2.2.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_jondurbin__airoboros-l2-7b-2.2.1",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-23T08:56:47.805064](https://huggingface.co/datasets/open-llm-leaderboard/details_jondurbin__airoboros-l2-7b-2.2.1/blob/main/results_2023-10-23T08-56-47.805064.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each task in the results and in the "latest" split for each eval):
```python
{
"all": {
"em": 0.02307046979865772,
"em_stderr": 0.0015374446489046648,
"f1": 0.08397441275167743,
"f1_stderr": 0.001986739570455047,
"acc": 0.39968692648816134,
"acc_stderr": 0.009485985984111937
},
"harness|drop|3": {
"em": 0.02307046979865772,
"em_stderr": 0.0015374446489046648,
"f1": 0.08397441275167743,
"f1_stderr": 0.001986739570455047
},
"harness|gsm8k|5": {
"acc": 0.06141015921152388,
"acc_stderr": 0.006613027536586316
},
"harness|winogrande|5": {
"acc": 0.7379636937647988,
"acc_stderr": 0.012358944431637557
}
}
```
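For quick inspection without downloading anything, the aggregated JSON above can be flattened into a per-task accuracy table in plain Python. This is only a sketch: the dictionary below is copied from the results shown, and the variable names are illustrative.

```python
# Aggregated results copied verbatim from the "Latest results" JSON above;
# no network access or `datasets` install is needed for this sketch.
results = {
    "all": {
        "em": 0.02307046979865772,
        "f1": 0.08397441275167743,
        "acc": 0.39968692648816134,
    },
    "harness|drop|3": {"em": 0.02307046979865772, "f1": 0.08397441275167743},
    "harness|gsm8k|5": {"acc": 0.06141015921152388},
    "harness|winogrande|5": {"acc": 0.7379636937647988},
}

# Keep only per-task entries (skip the "all" aggregate) and report accuracy
# where it is available; DROP reports em/f1 instead, so it is filtered out.
per_task_acc = {
    task: metrics["acc"]
    for task, metrics in results.items()
    if task != "all" and "acc" in metrics
}

for task, acc in sorted(per_task_acc.items()):
    print(f"{task}: {acc:.4f}")
# → harness|gsm8k|5: 0.0614
# → harness|winogrande|5: 0.7380
```

The same pattern scales to the full 64-configuration case: load the "results" configuration with `load_dataset` as shown earlier and apply the same filtering to its records.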
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
sagnikrayc/mctest | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets: []
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
paperswithcode_id: mctest
language_bcp47:
- en-US
tags:
- explanations-in-question-answering
---
# Dataset Card for MCTest
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** N/A
- **Repository:** [GitHub](https://github.com/mcobzarenco/mctest/)
- **Paper:** [MCTest: A Challenge Dataset for the Open-Domain Machine Comprehension of Text](https://www.aclweb.org/anthology/D13-1020.pdf)
- **Leaderboard:** N/A
- **Point of Contact:** -
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Microsoft Research License Agreement.
### Citation Information
[More Information Needed]
### Contributions
|
mask-distilled-onesec-cv12-each-chunk-uniq/chunk_47 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1180407072.0
num_examples: 231816
download_size: 1201783635
dataset_size: 1180407072.0
---
# Dataset Card for "chunk_47"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
antareepdey/Medical_chat_Llama-chat-template | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: Text
dtype: string
splits:
- name: train
num_bytes: 384344651
num_examples: 379455
download_size: 218544482
dataset_size: 384344651
---
|
CyberHarem/shiahuoshiyuroze_kumakumakumabear | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of シア・フォシュローゼ
This is the dataset of シア・フォシュローゼ, containing 201 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 201 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 480 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 201 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 201 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 201 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 201 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 201 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 480 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 480 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 480 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
thobauma/harmless-poisoned-0.01-chuela2502-murder | ---
dataset_info:
features:
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 58402939.44335993
num_examples: 42537
download_size: 31364075
dataset_size: 58402939.44335993
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CyberHarem/shinozaki_rika_swordartonline | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of shinozaki_rika (Sword Art Online)
This is the dataset of shinozaki_rika (Sword Art Online), containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
|
autoevaluate/autoeval-staging-eval-glue-mrpc-4a87ed-14445977 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: natural_language_inference
model: Intel/roberta-base-mrpc
metrics: []
dataset_name: glue
dataset_config: mrpc
dataset_split: validation
col_mapping:
text1: sentence1
text2: sentence2
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Natural Language Inference
* Model: Intel/roberta-base-mrpc
* Dataset: glue
* Config: mrpc
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@xinhe](https://huggingface.co/xinhe) for evaluating this model. |
open-llm-leaderboard/details_SanjiWatsuki__Loyal-Macaroni-Maid-7B | ---
pretty_name: Evaluation run of SanjiWatsuki/Loyal-Macaroni-Maid-7B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [SanjiWatsuki/Loyal-Macaroni-Maid-7B](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_SanjiWatsuki__Loyal-Macaroni-Maid-7B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-12-27T13:05:27.918633](https://huggingface.co/datasets/open-llm-leaderboard/details_SanjiWatsuki__Loyal-Macaroni-Maid-7B/blob/main/results_2023-12-27T13-05-27.918633.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each task in the results and in the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6523016655618005,\n\
\ \"acc_stderr\": 0.03208657966943031,\n \"acc_norm\": 0.6528749403062928,\n\
\ \"acc_norm_stderr\": 0.03274010556971135,\n \"mc1\": 0.4589963280293758,\n\
\ \"mc1_stderr\": 0.017444544447661192,\n \"mc2\": 0.6249833293230833,\n\
\ \"mc2_stderr\": 0.015381372353250218\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6476109215017065,\n \"acc_stderr\": 0.013960142600598677,\n\
\ \"acc_norm\": 0.6800341296928327,\n \"acc_norm_stderr\": 0.013631345807016195\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6837283409679347,\n\
\ \"acc_stderr\": 0.004640699483543313,\n \"acc_norm\": 0.8638717386974706,\n\
\ \"acc_norm_stderr\": 0.003422238702226356\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6592592592592592,\n\
\ \"acc_stderr\": 0.04094376269996792,\n \"acc_norm\": 0.6592592592592592,\n\
\ \"acc_norm_stderr\": 0.04094376269996792\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6842105263157895,\n \"acc_stderr\": 0.0378272898086547,\n\
\ \"acc_norm\": 0.6842105263157895,\n \"acc_norm_stderr\": 0.0378272898086547\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.6,\n\
\ \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.6,\n \
\ \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.690566037735849,\n \"acc_stderr\": 0.02845015479411864,\n\
\ \"acc_norm\": 0.690566037735849,\n \"acc_norm_stderr\": 0.02845015479411864\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.75,\n\
\ \"acc_stderr\": 0.03621034121889507,\n \"acc_norm\": 0.75,\n \
\ \"acc_norm_stderr\": 0.03621034121889507\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620332,\n \
\ \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620332\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n\
\ \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.653179190751445,\n\
\ \"acc_stderr\": 0.036291466701596636,\n \"acc_norm\": 0.653179190751445,\n\
\ \"acc_norm_stderr\": 0.036291466701596636\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4411764705882353,\n \"acc_stderr\": 0.049406356306056595,\n\
\ \"acc_norm\": 0.4411764705882353,\n \"acc_norm_stderr\": 0.049406356306056595\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.77,\n \"acc_stderr\": 0.04229525846816506,\n \"acc_norm\": 0.77,\n\
\ \"acc_norm_stderr\": 0.04229525846816506\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.6085106382978723,\n \"acc_stderr\": 0.03190701242326812,\n\
\ \"acc_norm\": 0.6085106382978723,\n \"acc_norm_stderr\": 0.03190701242326812\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5263157894736842,\n\
\ \"acc_stderr\": 0.046970851366478626,\n \"acc_norm\": 0.5263157894736842,\n\
\ \"acc_norm_stderr\": 0.046970851366478626\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5724137931034483,\n \"acc_stderr\": 0.04122737111370333,\n\
\ \"acc_norm\": 0.5724137931034483,\n \"acc_norm_stderr\": 0.04122737111370333\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.41005291005291006,\n \"acc_stderr\": 0.02533120243894443,\n \"\
acc_norm\": 0.41005291005291006,\n \"acc_norm_stderr\": 0.02533120243894443\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4603174603174603,\n\
\ \"acc_stderr\": 0.04458029125470973,\n \"acc_norm\": 0.4603174603174603,\n\
\ \"acc_norm_stderr\": 0.04458029125470973\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.7838709677419354,\n \"acc_stderr\": 0.02341529343356852,\n \"\
acc_norm\": 0.7838709677419354,\n \"acc_norm_stderr\": 0.02341529343356852\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.5123152709359606,\n \"acc_stderr\": 0.035169204442208966,\n \"\
acc_norm\": 0.5123152709359606,\n \"acc_norm_stderr\": 0.035169204442208966\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\"\
: 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7818181818181819,\n \"acc_stderr\": 0.03225078108306289,\n\
\ \"acc_norm\": 0.7818181818181819,\n \"acc_norm_stderr\": 0.03225078108306289\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7929292929292929,\n \"acc_stderr\": 0.02886977846026705,\n \"\
acc_norm\": 0.7929292929292929,\n \"acc_norm_stderr\": 0.02886977846026705\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9067357512953368,\n \"acc_stderr\": 0.020986854593289733,\n\
\ \"acc_norm\": 0.9067357512953368,\n \"acc_norm_stderr\": 0.020986854593289733\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6743589743589744,\n \"acc_stderr\": 0.02375966576741229,\n \
\ \"acc_norm\": 0.6743589743589744,\n \"acc_norm_stderr\": 0.02375966576741229\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.337037037037037,\n \"acc_stderr\": 0.02882088466625326,\n \
\ \"acc_norm\": 0.337037037037037,\n \"acc_norm_stderr\": 0.02882088466625326\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.7016806722689075,\n \"acc_stderr\": 0.029719142876342856,\n\
\ \"acc_norm\": 0.7016806722689075,\n \"acc_norm_stderr\": 0.029719142876342856\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.32450331125827814,\n \"acc_stderr\": 0.038227469376587525,\n \"\
acc_norm\": 0.32450331125827814,\n \"acc_norm_stderr\": 0.038227469376587525\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8605504587155963,\n \"acc_stderr\": 0.014852421490033053,\n \"\
acc_norm\": 0.8605504587155963,\n \"acc_norm_stderr\": 0.014852421490033053\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5277777777777778,\n \"acc_stderr\": 0.0340470532865388,\n \"acc_norm\"\
: 0.5277777777777778,\n \"acc_norm_stderr\": 0.0340470532865388\n },\n\
\ \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8235294117647058,\n\
\ \"acc_stderr\": 0.026756401538078966,\n \"acc_norm\": 0.8235294117647058,\n\
\ \"acc_norm_stderr\": 0.026756401538078966\n },\n \"harness|hendrycksTest-high_school_world_history|5\"\
: {\n \"acc\": 0.8059071729957806,\n \"acc_stderr\": 0.025744902532290913,\n\
\ \"acc_norm\": 0.8059071729957806,\n \"acc_norm_stderr\": 0.025744902532290913\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6995515695067265,\n\
\ \"acc_stderr\": 0.03076935200822914,\n \"acc_norm\": 0.6995515695067265,\n\
\ \"acc_norm_stderr\": 0.03076935200822914\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7633587786259542,\n \"acc_stderr\": 0.03727673575596913,\n\
\ \"acc_norm\": 0.7633587786259542,\n \"acc_norm_stderr\": 0.03727673575596913\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7851239669421488,\n \"acc_stderr\": 0.037494924487096966,\n \"\
acc_norm\": 0.7851239669421488,\n \"acc_norm_stderr\": 0.037494924487096966\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8148148148148148,\n\
\ \"acc_stderr\": 0.03755265865037181,\n \"acc_norm\": 0.8148148148148148,\n\
\ \"acc_norm_stderr\": 0.03755265865037181\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7607361963190185,\n \"acc_stderr\": 0.033519538795212696,\n\
\ \"acc_norm\": 0.7607361963190185,\n \"acc_norm_stderr\": 0.033519538795212696\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4732142857142857,\n\
\ \"acc_stderr\": 0.047389751192741546,\n \"acc_norm\": 0.4732142857142857,\n\
\ \"acc_norm_stderr\": 0.047389751192741546\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7766990291262136,\n \"acc_stderr\": 0.04123553189891431,\n\
\ \"acc_norm\": 0.7766990291262136,\n \"acc_norm_stderr\": 0.04123553189891431\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8760683760683761,\n\
\ \"acc_stderr\": 0.021586494001281376,\n \"acc_norm\": 0.8760683760683761,\n\
\ \"acc_norm_stderr\": 0.021586494001281376\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \
\ \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8403575989782887,\n\
\ \"acc_stderr\": 0.013097934513263007,\n \"acc_norm\": 0.8403575989782887,\n\
\ \"acc_norm_stderr\": 0.013097934513263007\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7283236994219653,\n \"acc_stderr\": 0.023948512905468365,\n\
\ \"acc_norm\": 0.7283236994219653,\n \"acc_norm_stderr\": 0.023948512905468365\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4670391061452514,\n\
\ \"acc_stderr\": 0.016686126653013934,\n \"acc_norm\": 0.4670391061452514,\n\
\ \"acc_norm_stderr\": 0.016686126653013934\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7156862745098039,\n \"acc_stderr\": 0.025829163272757482,\n\
\ \"acc_norm\": 0.7156862745098039,\n \"acc_norm_stderr\": 0.025829163272757482\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7009646302250804,\n\
\ \"acc_stderr\": 0.026003301117885142,\n \"acc_norm\": 0.7009646302250804,\n\
\ \"acc_norm_stderr\": 0.026003301117885142\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7376543209876543,\n \"acc_stderr\": 0.024477222856135107,\n\
\ \"acc_norm\": 0.7376543209876543,\n \"acc_norm_stderr\": 0.024477222856135107\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4858156028368794,\n \"acc_stderr\": 0.02981549448368206,\n \
\ \"acc_norm\": 0.4858156028368794,\n \"acc_norm_stderr\": 0.02981549448368206\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.45827900912646674,\n\
\ \"acc_stderr\": 0.01272570165695364,\n \"acc_norm\": 0.45827900912646674,\n\
\ \"acc_norm_stderr\": 0.01272570165695364\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6838235294117647,\n \"acc_stderr\": 0.028245687391462937,\n\
\ \"acc_norm\": 0.6838235294117647,\n \"acc_norm_stderr\": 0.028245687391462937\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6666666666666666,\n \"acc_stderr\": 0.019070985589687495,\n \
\ \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.019070985589687495\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6909090909090909,\n\
\ \"acc_stderr\": 0.044262946482000985,\n \"acc_norm\": 0.6909090909090909,\n\
\ \"acc_norm_stderr\": 0.044262946482000985\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7306122448979592,\n \"acc_stderr\": 0.02840125202902294,\n\
\ \"acc_norm\": 0.7306122448979592,\n \"acc_norm_stderr\": 0.02840125202902294\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8706467661691543,\n\
\ \"acc_stderr\": 0.02372983088101853,\n \"acc_norm\": 0.8706467661691543,\n\
\ \"acc_norm_stderr\": 0.02372983088101853\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774709,\n \
\ \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.03684529491774709\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5240963855421686,\n\
\ \"acc_stderr\": 0.03887971849597264,\n \"acc_norm\": 0.5240963855421686,\n\
\ \"acc_norm_stderr\": 0.03887971849597264\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n\
\ \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4589963280293758,\n\
\ \"mc1_stderr\": 0.017444544447661192,\n \"mc2\": 0.6249833293230833,\n\
\ \"mc2_stderr\": 0.015381372353250218\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7987371744277821,\n \"acc_stderr\": 0.011268519971577684\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6846095526914329,\n \
\ \"acc_stderr\": 0.012799353675801825\n }\n}\n```"
repo_url: https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|arc:challenge|25_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|gsm8k|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hellaswag|10_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-27T13-05-27.918633.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-27T13-05-27.918633.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- '**/details_harness|winogrande|5_2023-12-27T13-05-27.918633.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-12-27T13-05-27.918633.parquet'
- config_name: results
data_files:
- split: 2023_12_27T13_05_27.918633
path:
- results_2023-12-27T13-05-27.918633.parquet
- split: latest
path:
- results_2023-12-27T13-05-27.918633.parquet
---
# Dataset Card for Evaluation run of SanjiWatsuki/Loyal-Macaroni-Maid-7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [SanjiWatsuki/Loyal-Macaroni-Maid-7B](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_SanjiWatsuki__Loyal-Macaroni-Maid-7B",
"harness_winogrande_5",
split="train")
```
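The per-task configuration names listed above follow a predictable pattern. As a convenience, here is a small sketch (an assumption inferred from the YAML metadata in this card, not an official API) that derives a config name from a harness task id:

```python
def config_name(task: str) -> str:
    """Map a harness task id such as 'harness|hendrycksTest-professional_law|5'
    to the config name used in this repo (separators swapped for underscores)."""
    prefix, body, shots = task.split("|")
    body = body.replace("-", "_").replace(":", "_")
    return f"{prefix}_{body}_{shots}"

print(config_name("harness|hendrycksTest-professional_law|5"))
# harness_hendrycksTest_professional_law_5
print(config_name("harness|truthfulqa:mc|0"))
# harness_truthfulqa_mc_0
```

The resulting string can be passed as the second argument to `load_dataset` in the snippet above.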
## Latest results
These are the [latest results from run 2023-12-27T13:05:27.918633](https://huggingface.co/datasets/open-llm-leaderboard/details_SanjiWatsuki__Loyal-Macaroni-Maid-7B/blob/main/results_2023-12-27T13-05-27.918633.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each task in the results and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.6523016655618005,
"acc_stderr": 0.03208657966943031,
"acc_norm": 0.6528749403062928,
"acc_norm_stderr": 0.03274010556971135,
"mc1": 0.4589963280293758,
"mc1_stderr": 0.017444544447661192,
"mc2": 0.6249833293230833,
"mc2_stderr": 0.015381372353250218
},
"harness|arc:challenge|25": {
"acc": 0.6476109215017065,
"acc_stderr": 0.013960142600598677,
"acc_norm": 0.6800341296928327,
"acc_norm_stderr": 0.013631345807016195
},
"harness|hellaswag|10": {
"acc": 0.6837283409679347,
"acc_stderr": 0.004640699483543313,
"acc_norm": 0.8638717386974706,
"acc_norm_stderr": 0.003422238702226356
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6592592592592592,
"acc_stderr": 0.04094376269996792,
"acc_norm": 0.6592592592592592,
"acc_norm_stderr": 0.04094376269996792
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6842105263157895,
"acc_stderr": 0.0378272898086547,
"acc_norm": 0.6842105263157895,
"acc_norm_stderr": 0.0378272898086547
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.6,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.6,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.690566037735849,
"acc_stderr": 0.02845015479411864,
"acc_norm": 0.690566037735849,
"acc_norm_stderr": 0.02845015479411864
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.75,
"acc_stderr": 0.03621034121889507,
"acc_norm": 0.75,
"acc_norm_stderr": 0.03621034121889507
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252604,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252604
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.653179190751445,
"acc_stderr": 0.036291466701596636,
"acc_norm": 0.653179190751445,
"acc_norm_stderr": 0.036291466701596636
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4411764705882353,
"acc_stderr": 0.049406356306056595,
"acc_norm": 0.4411764705882353,
"acc_norm_stderr": 0.049406356306056595
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816506,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816506
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6085106382978723,
"acc_stderr": 0.03190701242326812,
"acc_norm": 0.6085106382978723,
"acc_norm_stderr": 0.03190701242326812
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5263157894736842,
"acc_stderr": 0.046970851366478626,
"acc_norm": 0.5263157894736842,
"acc_norm_stderr": 0.046970851366478626
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5724137931034483,
"acc_stderr": 0.04122737111370333,
"acc_norm": 0.5724137931034483,
"acc_norm_stderr": 0.04122737111370333
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.41005291005291006,
"acc_stderr": 0.02533120243894443,
"acc_norm": 0.41005291005291006,
"acc_norm_stderr": 0.02533120243894443
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4603174603174603,
"acc_stderr": 0.04458029125470973,
"acc_norm": 0.4603174603174603,
"acc_norm_stderr": 0.04458029125470973
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7838709677419354,
"acc_stderr": 0.02341529343356852,
"acc_norm": 0.7838709677419354,
"acc_norm_stderr": 0.02341529343356852
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5123152709359606,
"acc_stderr": 0.035169204442208966,
"acc_norm": 0.5123152709359606,
"acc_norm_stderr": 0.035169204442208966
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7818181818181819,
"acc_stderr": 0.03225078108306289,
"acc_norm": 0.7818181818181819,
"acc_norm_stderr": 0.03225078108306289
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7929292929292929,
"acc_stderr": 0.02886977846026705,
"acc_norm": 0.7929292929292929,
"acc_norm_stderr": 0.02886977846026705
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9067357512953368,
"acc_stderr": 0.020986854593289733,
"acc_norm": 0.9067357512953368,
"acc_norm_stderr": 0.020986854593289733
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6743589743589744,
"acc_stderr": 0.02375966576741229,
"acc_norm": 0.6743589743589744,
"acc_norm_stderr": 0.02375966576741229
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.337037037037037,
"acc_stderr": 0.02882088466625326,
"acc_norm": 0.337037037037037,
"acc_norm_stderr": 0.02882088466625326
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7016806722689075,
"acc_stderr": 0.029719142876342856,
"acc_norm": 0.7016806722689075,
"acc_norm_stderr": 0.029719142876342856
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.32450331125827814,
"acc_stderr": 0.038227469376587525,
"acc_norm": 0.32450331125827814,
"acc_norm_stderr": 0.038227469376587525
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8605504587155963,
"acc_stderr": 0.014852421490033053,
"acc_norm": 0.8605504587155963,
"acc_norm_stderr": 0.014852421490033053
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5277777777777778,
"acc_stderr": 0.0340470532865388,
"acc_norm": 0.5277777777777778,
"acc_norm_stderr": 0.0340470532865388
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8235294117647058,
"acc_stderr": 0.026756401538078966,
"acc_norm": 0.8235294117647058,
"acc_norm_stderr": 0.026756401538078966
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8059071729957806,
"acc_stderr": 0.025744902532290913,
"acc_norm": 0.8059071729957806,
"acc_norm_stderr": 0.025744902532290913
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6995515695067265,
"acc_stderr": 0.03076935200822914,
"acc_norm": 0.6995515695067265,
"acc_norm_stderr": 0.03076935200822914
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7633587786259542,
"acc_stderr": 0.03727673575596913,
"acc_norm": 0.7633587786259542,
"acc_norm_stderr": 0.03727673575596913
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7851239669421488,
"acc_stderr": 0.037494924487096966,
"acc_norm": 0.7851239669421488,
"acc_norm_stderr": 0.037494924487096966
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8148148148148148,
"acc_stderr": 0.03755265865037181,
"acc_norm": 0.8148148148148148,
"acc_norm_stderr": 0.03755265865037181
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7607361963190185,
"acc_stderr": 0.033519538795212696,
"acc_norm": 0.7607361963190185,
"acc_norm_stderr": 0.033519538795212696
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4732142857142857,
"acc_stderr": 0.047389751192741546,
"acc_norm": 0.4732142857142857,
"acc_norm_stderr": 0.047389751192741546
},
"harness|hendrycksTest-management|5": {
"acc": 0.7766990291262136,
"acc_stderr": 0.04123553189891431,
"acc_norm": 0.7766990291262136,
"acc_norm_stderr": 0.04123553189891431
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8760683760683761,
"acc_stderr": 0.021586494001281376,
"acc_norm": 0.8760683760683761,
"acc_norm_stderr": 0.021586494001281376
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8403575989782887,
"acc_stderr": 0.013097934513263007,
"acc_norm": 0.8403575989782887,
"acc_norm_stderr": 0.013097934513263007
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7283236994219653,
"acc_stderr": 0.023948512905468365,
"acc_norm": 0.7283236994219653,
"acc_norm_stderr": 0.023948512905468365
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4670391061452514,
"acc_stderr": 0.016686126653013934,
"acc_norm": 0.4670391061452514,
"acc_norm_stderr": 0.016686126653013934
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7156862745098039,
"acc_stderr": 0.025829163272757482,
"acc_norm": 0.7156862745098039,
"acc_norm_stderr": 0.025829163272757482
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7009646302250804,
"acc_stderr": 0.026003301117885142,
"acc_norm": 0.7009646302250804,
"acc_norm_stderr": 0.026003301117885142
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7376543209876543,
"acc_stderr": 0.024477222856135107,
"acc_norm": 0.7376543209876543,
"acc_norm_stderr": 0.024477222856135107
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4858156028368794,
"acc_stderr": 0.02981549448368206,
"acc_norm": 0.4858156028368794,
"acc_norm_stderr": 0.02981549448368206
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.45827900912646674,
"acc_stderr": 0.01272570165695364,
"acc_norm": 0.45827900912646674,
"acc_norm_stderr": 0.01272570165695364
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6838235294117647,
"acc_stderr": 0.028245687391462937,
"acc_norm": 0.6838235294117647,
"acc_norm_stderr": 0.028245687391462937
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.019070985589687495,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.019070985589687495
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6909090909090909,
"acc_stderr": 0.044262946482000985,
"acc_norm": 0.6909090909090909,
"acc_norm_stderr": 0.044262946482000985
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7306122448979592,
"acc_stderr": 0.02840125202902294,
"acc_norm": 0.7306122448979592,
"acc_norm_stderr": 0.02840125202902294
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8706467661691543,
"acc_stderr": 0.02372983088101853,
"acc_norm": 0.8706467661691543,
"acc_norm_stderr": 0.02372983088101853
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.84,
"acc_stderr": 0.03684529491774709,
"acc_norm": 0.84,
"acc_norm_stderr": 0.03684529491774709
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5240963855421686,
"acc_stderr": 0.03887971849597264,
"acc_norm": 0.5240963855421686,
"acc_norm_stderr": 0.03887971849597264
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4589963280293758,
"mc1_stderr": 0.017444544447661192,
"mc2": 0.6249833293230833,
"mc2_stderr": 0.015381372353250218
},
"harness|winogrande|5": {
"acc": 0.7987371744277821,
"acc_stderr": 0.011268519971577684
},
"harness|gsm8k|5": {
"acc": 0.6846095526914329,
"acc_stderr": 0.012799353675801825
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/640fc86b | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 190
num_examples: 10
download_size: 1319
dataset_size: 190
---
# Dataset Card for "640fc86b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
schlechter/NER_Datasets_SMAI | ---
license: mit
---
|
nojiyoon/pagoda-text-and-image-dataset | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 12711851476.216
num_examples: 2436
download_size: 14206926332
dataset_size: 12711851476.216
---
# Dataset Card for "pagoda-text-and-image-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jbilcke-hf/ai-tube-the-artificial-tutor | ---
license: cc-by-nc-sa-4.0
pretty_name: The Artificial Tutor
---
## Description
Proud robot and cat dad, I make educative videos about science 👨🔬
## Tags
- Education
## Voice
Julian
## Prompt
A video channel managed by a robot called Archimedes.
The videos are educational, explaining how various scientific topics and phenomena work.
It also produces some tutorials about various things, from programming to cooking.
It talks about engineering, construction, architecture, chemistry, computers, radioactivity, energy production, etc. (basically anything).
The videos should be short and entertaining. |
jq/audio_mock_1 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: audio_language
dtype: string
- name: is_studio
dtype: bool
- name: speaker_id
dtype: string
splits:
- name: train
num_bytes: 716.0
num_examples: 8
download_size: 3877
dataset_size: 716.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
liuyanchen1015/MULTI_VALUE_wnli_for_to_pupose | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 365
num_examples: 1
- name: train
num_bytes: 4540
num_examples: 15
download_size: 8560
dataset_size: 4905
---
# Dataset Card for "MULTI_VALUE_wnli_for_to_pupose"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mask-distilled-one-sec-cv12/chunk_249 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1102886464
num_examples: 216592
download_size: 1124375605
dataset_size: 1102886464
---
# Dataset Card for "chunk_249"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/diluc_genshin | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of diluc_genshin
This is the dataset of diluc_genshin, containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 398 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 398 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 398 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 398 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
theGhoul21/t-pas-val-light-2 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 975290
num_examples: 3040
download_size: 605760
dataset_size: 975290
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
fengtc/alpaca_data_chinese_51k | ---
license: openrail
---
|
distilled-from-one-sec-cv12/chunk_126 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1259536496
num_examples: 245428
download_size: 1286068176
dataset_size: 1259536496
---
# Dataset Card for "chunk_126"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jhhon80/jhon | ---
license: openrail
---
|
Multimodal-Fatima/FGVC_Aircraft_test_facebook_opt_1.3b_Attributes_Caption_ns_3333 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: image
dtype: image
- name: prompt
dtype: string
- name: true_label
dtype: string
- name: prediction
dtype: string
- name: scores
sequence: float64
splits:
- name: fewshot_0_bs_16
num_bytes: 299298610.375
num_examples: 3333
- name: fewshot_1_bs_16
num_bytes: 300147760.375
num_examples: 3333
- name: fewshot_3_bs_16
num_bytes: 301863001.375
num_examples: 3333
download_size: 891928796
dataset_size: 901309372.125
---
# Dataset Card for "FGVC_Aircraft_test_facebook_opt_1.3b_Attributes_Caption_ns_3333"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
alisson40889/xo | ---
license: openrail
---
|
nguyenvulebinh/libris_clean_100 | ---
pretty_name: LibriSpeech
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
paperswithcode_id: librispeech-1
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- automatic-speech-recognition
- audio-classification
task_ids:
- speaker-identification
dataset_info:
- config_name: clean
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train.100
num_bytes: 6619683041
num_examples: 28539
- name: train.360
num_bytes: 23898214592
num_examples: 104014
- name: validation
num_bytes: 359572231
num_examples: 2703
- name: test
num_bytes: 367705423
num_examples: 2620
download_size: 30121377654
dataset_size: 31245175287
- config_name: other
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train.500
num_bytes: 31810256902
num_examples: 148688
- name: validation
num_bytes: 337283304
num_examples: 2864
- name: test
num_bytes: 352396474
num_examples: 2939
download_size: 31236565377
dataset_size: 32499936680
- config_name: all
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train.clean.100
num_bytes: 6627791685
num_examples: 28539
- name: train.clean.360
num_bytes: 23927767570
num_examples: 104014
- name: train.other.500
num_bytes: 31852502880
num_examples: 148688
- name: validation.clean
num_bytes: 359505691
num_examples: 2703
- name: validation.other
num_bytes: 337213112
num_examples: 2864
- name: test.clean
num_bytes: 368449831
num_examples: 2620
- name: test.other
num_bytes: 353231518
num_examples: 2939
download_size: 61357943031
dataset_size: 63826462287
---
# Dataset Card for librispeech_asr
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [LibriSpeech ASR corpus](http://www.openslr.org/12)
- **Repository:** [Needs More Information]
- **Paper:** [LibriSpeech: An ASR Corpus Based On Public Domain Audio Books](https://www.danielpovey.com/files/2015_icassp_librispeech.pdf)
- **Leaderboard:** [The 🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
- **Point of Contact:** [Daniel Povey](mailto:dpovey@gmail.com)
### Dataset Summary
LibriSpeech is a corpus of approximately 1000 hours of 16kHz read English speech, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned.
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`, `audio-speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard which can be found at https://huggingface.co/spaces/huggingface/hf-speech-bench. The leaderboard ranks models uploaded to the Hub based on their WER. An external leaderboard at https://paperswithcode.com/sota/speech-recognition-on-librispeech-test-clean ranks the latest models from research and academia.
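Since WER is the headline metric for this benchmark, a minimal sketch of how it is computed may help: word-level Levenshtein (edit) distance divided by the number of reference words (the function name is illustrative; production systems typically also apply text normalization first).

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("a man said to the universe", "a man said to universe"))
# one deleted word out of six -> 0.1666...
```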
### Languages
The audio is in English. There are two configurations: `clean` and `other`.
The speakers in the corpus were ranked according to the WER of the transcripts of a model trained on
a different dataset, and were divided roughly in the middle,
with the lower-WER speakers designated as "clean" and the higher-WER speakers designated as "other".
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, usually called `file` and its transcription, called `text`. Some additional information about the speaker and the passage which contains the transcription is provided.
```
{'chapter_id': 141231,
'file': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac',
'audio': {'path': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346,
0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'id': '1272-141231-0000',
'speaker_id': 1272,
'text': 'A MAN SAID TO THE UNIVERSE SIR I EXIST'}
```
### Data Fields
- file: A path to the downloaded audio file in .flac format.
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: the transcription of the audio file.
- id: unique id of the data sample.
- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
- chapter_id: id of the audiobook chapter which includes the transcription.
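The decoding caveat above (index the row first, then the `"audio"` column) can be illustrated with a toy sketch; this is pure Python with no real audio, where `decode` stands in for the library's flac-to-array decoding step:

```python
decoded = []  # records which files get "decoded"

def decode(path):
    # Stand-in for the expensive flac -> waveform decoding step.
    decoded.append(path)
    return {"path": path, "array": [0.0], "sampling_rate": 16000}

files = [f"clip_{i}.flac" for i in range(100)]

# dataset["audio"][0]-style access: the whole column is decoded first.
column = [decode(p) for p in files]
_ = column[0]
assert len(decoded) == 100  # every file paid the decoding cost

# dataset[0]["audio"]-style access: only the requested example is decoded.
decoded.clear()
_ = decode(files[0])
assert len(decoded) == 1
```

This is why `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]` on large splits.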
### Data Splits
The size of the corpus makes it impractical, or at least inconvenient
for some users, to distribute it as a single large archive. Thus the
training portion of the corpus is split into three subsets, with approximate sizes of 100, 360 and 500 hours respectively.
A simple automatic
procedure was used to select the audio in the first two sets to be, on
average, of higher recording quality and with accents closer to US
English. An acoustic model was trained on WSJ’s si-84 data subset
and was used to recognize the audio in the corpus, using a bigram
LM estimated on the text of the respective books. We computed the
Word Error Rate (WER) of this automatic transcript relative to our
reference transcripts obtained from the book texts.
The speakers in the corpus were ranked according to the WER of
the WSJ model’s transcripts, and were divided roughly in the middle,
with the lower-WER speakers designated as "clean" and the higher-WER speakers designated as "other".
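The speaker-ranking procedure above can be sketched as follows (the WER values here are made up for illustration; per-speaker WERs are not distributed with the corpus):

```python
# Rank speakers by the WER of the WSJ model's automatic transcripts and
# split roughly in the middle: lower-WER speakers -> "clean", the rest -> "other".
def split_speakers_by_wer(speaker_wer):
    """speaker_wer: dict mapping speaker_id -> WER (as a fraction)."""
    ranked = sorted(speaker_wer, key=speaker_wer.get)
    mid = len(ranked) // 2
    return set(ranked[:mid]), set(ranked[mid:])  # (clean, other)

# Hypothetical per-speaker WERs for illustration only.
wers = {1272: 0.04, 2035: 0.12, 777: 0.06, 422: 0.21}
clean, other = split_speakers_by_wer(wers)
# clean holds the two lowest-WER speakers: {1272, 777}
```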
For "clean", the data is split into train, validation, and test set. The train set is further split into train.100 and train.360
respectively accounting for 100h and 360h of the training data.
For "other", the data is split into train, validation, and test set. The train set contains approximately 500h of recorded speech.
| | Train.500 | Train.360 | Train.100 | Valid | Test |
| ----- | ------ | ----- | ---- | ---- | ---- |
| clean | - | 104014 | 28539 | 2703 | 2620|
| other | 148688 | - | - | 2864 | 2939 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of recordings of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was initially created by Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur.
### Licensing Information
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@inproceedings{panayotov2015librispeech,
title={Librispeech: an ASR corpus based on public domain audio books},
author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on},
pages={5206--5210},
year={2015},
organization={IEEE}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
HuanLin/BiaoBei | ---
license: cc
---
|
irds/lotte_writing_dev_forum | ---
pretty_name: '`lotte/writing/dev/forum`'
viewer: false
source_datasets: ['irds/lotte_writing_dev']
task_categories:
- text-retrieval
---
# Dataset Card for `lotte/writing/dev/forum`
The `lotte/writing/dev/forum` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/lotte#lotte/writing/dev/forum).
# Data
This dataset provides:
- `queries` (i.e., topics); count=2,003
- `qrels`: (relevance assessments); count=15,098
- For `docs`, use [`irds/lotte_writing_dev`](https://huggingface.co/datasets/irds/lotte_writing_dev)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/lotte_writing_dev_forum', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/lotte_writing_dev_forum', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@article{Santhanam2021ColBERTv2,
title = "ColBERTv2: Effective and Efficient Retrieval via Lightweight Late Interaction",
author = "Keshav Santhanam and Omar Khattab and Jon Saad-Falcon and Christopher Potts and Matei Zaharia",
journal= "arXiv preprint arXiv:2112.01488",
year = "2021",
url = "https://arxiv.org/abs/2112.01488"
}
```
|
amitpuri/bollywood-celebs | ---
task_categories:
- image-classification
license: mit
language:
- en
pretty_name: ' bollywood-celebs'
---
# bollywood-celebs
## Dataset Description
This dataset has been automatically processed by AutoTrain for project bollywood-celebs.
Credits: https://www.kaggle.com/datasets/sushilyadav1998/bollywood-celeb-localized-face-dataset
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<64x64 RGB PIL image>",
"target": 15
},
{
"image": "<64x64 RGB PIL image>",
"target": 82
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['Aamir_Khan', 'Abhay_Deol', 'Abhishek_Bachchan', 'Aftab_Shivdasani', 'Aishwarya_Rai', 'Ajay_Devgn', 'Akshay_Kumar', 'Akshaye_Khanna', 'Alia_Bhatt', 'Ameesha_Patel', 'Amitabh_Bachchan', 'Amrita_Rao', 'Amy_Jackson', 'Anil_Kapoor', 'Anushka_Sharma', 'Anushka_Shetty', 'Arjun_Kapoor', 'Arjun_Rampal', 'Arshad_Warsi', 'Asin', 'Ayushmann_Khurrana', 'Bhumi_Pednekar', 'Bipasha_Basu', 'Bobby_Deol', 'Deepika_Padukone', 'Disha_Patani', 'Emraan_Hashmi', 'Esha_Gupta', 'Farhan_Akhtar', 'Govinda', 'Hrithik_Roshan', 'Huma_Qureshi', 'Ileana_DCruz', 'Irrfan_Khan', 'Jacqueline_Fernandez', 'John_Abraham', 'Juhi_Chawla', 'Kajal_Aggarwal', 'Kajol', 'Kangana_Ranaut', 'Kareena_Kapoor', 'Karisma_Kapoor', 'Kartik_Aaryan', 'Katrina_Kaif', 'Kiara_Advani', 'Kriti_Kharbanda', 'Kriti_Sanon', 'Kunal_Khemu', 'Lara_Dutta', 'Madhuri_Dixit', 'Manoj_Bajpayee', 'Mrunal_Thakur', 'Nana_Patekar', 'Nargis_Fakhri', 'Naseeruddin_Shah', 'Nushrat_Bharucha', 'Paresh_Rawal', 'Parineeti_Chopra', 'Pooja_Hegde', 'Prabhas', 'Prachi_Desai', 'Preity_Zinta', 'Priyanka_Chopra', 'R_Madhavan', 'Rajkummar_Rao', 'Ranbir_Kapoor', 'Randeep_Hooda', 'Rani_Mukerji', 'Ranveer_Singh', 'Richa_Chadda', 'Riteish_Deshmukh', 'Saif_Ali_Khan', 'Salman_Khan', 'Sanjay_Dutt', 'Sara_Ali_Khan', 'Shah_Rukh_Khan', 'Shahid_Kapoor', 'Shilpa_Shetty', 'Shraddha_Kapoor', 'Shreyas_Talpade', 'Shruti_Haasan', 'Sidharth_Malhotra', 'Sonakshi_Sinha', 'Sonam_Kapoor', 'Suniel_Shetty', 'Sunny_Deol', 'Sushant_Singh_Rajput', 'Taapsee_Pannu', 'Tabu', 'Tamannaah_Bhatia', 'Tiger_Shroff', 'Tusshar_Kapoor', 'Uday_Chopra', 'Vaani_Kapoor', 'Varun_Dhawan', 'Vicky_Kaushal', 'Vidya_Balan', 'Vivek_Oberoi', 'Yami_Gautam', 'Zareen_Khan'], id=None)"
}
```
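Since `target` is stored as an integer class index, mapping it back to a celebrity name goes through the `ClassLabel` feature (with 🤗 Datasets you would call `dataset.features["target"].int2str(...)`). A minimal sketch of the lookup, using a truncated copy of the names list above:

```python
# The ClassLabel stores names in a fixed order; the integer target is an
# index into that list. Truncated here for illustration -- the full dataset
# has 100 names in the order shown above.
names = ['Aamir_Khan', 'Abhay_Deol', 'Abhishek_Bachchan', 'Aftab_Shivdasani',
         'Aishwarya_Rai', 'Ajay_Devgn', 'Akshay_Kumar']

def int2str(target):
    """Class index -> label name."""
    return names[target]

def str2int(name):
    """Label name -> class index."""
    return names.index(name)

# With the truncated list above, target 5 maps to 'Ajay_Devgn'.
```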
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 6863 |
| valid | 1764 | |
JeffersonMusic/mj-historyera | ---
license: unknown
---
|
Ariel4/related-drugs-network | ---
license: cc-by-4.0
tags:
- chemistry
- biology
- graph
- network
- drugs
pretty_name: Network of Related Drugs from Drugs.com Database
size_categories:
- 1K<n<10K
---
Dataset created by crawling the [Drugs.com](https://www.drugs.com/) database - please abide by their [Terms and Conditions](https://www.drugs.com/support/terms.html)
### How the Graph was Created
Most drugs on Drugs.com have a **Related/Similar Drugs** section (e.g. [here](https://www.drugs.com/acetaminophen.html)). In my graph, nodes are drugs in the database, and edges connect each drug to the drugs listed on its Related/Similar Drugs page.
Note: Not all drugs in the dataset are part of the graph, as not all drugs have a "Related/Similar Drugs" section. |
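The crawl result can be represented as a plain adjacency list. A minimal sketch with made-up drug pages (the function name and the undirected-edge convention here are illustrative assumptions, not part of the released files):

```python
from collections import defaultdict

# Each crawled page yields (drug, [related drugs]); edges are treated as
# undirected here, since "related" is taken to be symmetric.
def build_graph(pages):
    adj = defaultdict(set)
    for drug, related in pages:
        for other in related:
            adj[drug].add(other)
            adj[other].add(drug)
    return adj

# Hypothetical crawl output for illustration only.
pages = [
    ("acetaminophen", ["ibuprofen", "naproxen"]),
    ("ibuprofen", ["naproxen"]),
]
graph = build_graph(pages)
# graph["naproxen"] == {"acetaminophen", "ibuprofen"}
```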