| datasetId | card |
|---|---|
result-muse256-muse512-wuerst-sdv15/b13fe8b2 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 208
num_examples: 10
download_size: 1369
dataset_size: 208
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "b13fe8b2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mtkinit/AI4Copernicus-Small-Sentiment-Dataset | ---
pretty_name: AI4Copernicus-Small-Sentiment-Dataset
---
# AI4Copernicus-Small-Sentiment-Dataset
Created from the AIOD platform. |
SKyu/my-image-captioning-dataset | ---
dataset_info:
features:
- name: image
dtype: image
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 417257082.9
num_examples: 3100
download_size: 480865927
dataset_size: 417257082.9
pretty_name: jl_pics
size_categories:
- 1K<n<10K
---
# Dataset Card for "my-image-captioning-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TristanBehrens/jsfakes2024json | ---
task_categories:
- text-generation
language:
- en
tags:
- music
pretty_name: jsfakes2024json
size_categories:
- 1K<n<10K
---
# JSFakes Chorales 2024 JSON
This is a JSON representation of the dataset https://github.com/omarperacha/js-fakes.
|
ProfessorBob/E5-finetune-dataset | ---
dataset_info:
- config_name: english
features:
- name: query
dtype: string
- name: passage
dtype: string
- name: source
dtype: string
- name: lang
dtype: string
splits:
- name: train
num_bytes: 1147790406
num_examples: 477830
- name: test
num_bytes: 137615402
num_examples: 50232
download_size: 435028273
dataset_size: 1285405808
- config_name: fr
features:
- name: query
dtype: string
- name: passage
dtype: string
- name: source
dtype: string
- name: lang
dtype: string
splits:
- name: train
num_bytes: 1112381997
num_examples: 372410
download_size: 234237009
dataset_size: 1112381997
configs:
- config_name: english
data_files:
- split: train
path: english/train-*
- split: test
path: english/test-*
- config_name: fr
data_files:
- split: train
path: fr/train-*
---
# E5-finetune Dataset
E5-finetune Dataset is a curated collection of query-passage pairs, encompassing a total of 870k examples. This dataset is specifically designed for fine-tuning models to extend their input length capabilities from 512 tokens to 1024 tokens. The primary focus is on accumulating long-context passages.
## Dataset in English
The dataset samples long-context passage examples from various sources, ensuring a rich and diverse collection.
The sources include:
- **SQuAD**: Approximately 80k examples. Adjacent passages have been merged to form longer passages suitable for extended input-length training.
- **Natural Questions**: A short-passage dataset.
- **robust04**: A collection of (question, passage) pairs from news sources, filtered specifically to retain long-context examples.
- **wikihow**: (summary, passage) pairs from WikiHow.
- **eli5**: A short-passage dataset.
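The SQuAD passage-merging step mentioned above can be sketched as follows. This is a minimal illustration assuming a simple character budget; the card does not specify the actual merging criterion:

```python
def merge_adjacent(passages, max_chars=4000):
    """Greedily merge consecutive passages into longer contexts.

    Passages are concatenated in order until adding the next one
    would exceed the character budget, then a new context starts.
    """
    merged, buf = [], ""
    for passage in passages:
        if buf and len(buf) + len(passage) + 1 > max_chars:
            merged.append(buf)
            buf = passage
        else:
            buf = f"{buf} {passage}".strip()
    if buf:
        merged.append(buf)
    return merged
```

A greedy merge like this preserves passage order, which matters when adjacent SQuAD paragraphs come from the same article.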
## Dataset in French
Existing French resources are very limited, so an LLM-based generation method is used to expand the dataset.
To generate a (question, passage) dataset with an LLM:
1. Gather a set of plain texts from different sources.
2. Ask the LLM to generate questions based on the given texts.
- **LLM-generated examples**
- **textbooks and novels**: These sources provide rich narrative and educational context, offering a wide range of topics and themes.
- **wikipedia**: Wikipedia articles contribute significantly to the breadth of the dataset.
- **Open-source examples**
- **FQuAD**: A French question-answering dataset, known for its quality and reliability.
- **Piaf**: A dataset tailored for question-answering systems, focusing on the intricacies of the French language.
- **wikihow**: The French version of WikiHow offers practical, instructional content, adding another dimension to the dataset.
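The two-step generation procedure above can be sketched as follows; `ask_llm` is a hypothetical stub standing in for a real LLM call:

```python
def ask_llm(prompt: str) -> str:
    # Hypothetical stub: a real implementation would call an LLM here.
    return "Quelle est l'idée principale de ce texte ?"

def build_question_passage_pairs(texts):
    """Step 1: gather plain texts; step 2: ask the LLM for a question per text."""
    return [(ask_llm(f"Write a question about this text: {text}"), text)
            for text in texts]
```

Each returned pair keeps the source text as the passage, so the output matches the (query, passage) schema declared in the YAML header above.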
## Dataset summary
| Source | Language | Context Length | Num. examples |
|------------------|----------|----------------|---------------|
| SQuAD | en | Mixed | 80k |
| Natural Questions | en | Short | 100k |
| Robust 04 | en | Long | 130k |
| wikihow | en | Mixed | 130k |
| eli5 | en | Short | 70k |
| textbook/novels | fr | Mixed | 190k |
| wikipedia | fr | Mixed | 90k |
| FQuAD + Piaf | fr | Short | 20k |
| wikihow | fr | Mixed | 60k |
### Specific domains
The textbook dataset generated with an LLM in French covers a wide range of academic domains. The book titles are listed below, grouped by domain.
**History:**
- "Contre-histoire du libéralisme"
- "Histoire de l'Émigration pendant la Révolution Française"
- "Histoire de la littérature française."
- "Histoire des mouvements sociaux en France"
- "Histoire du surréalisme"
- "La guerre froide"
- "Les Chaînes de l'Esclavage"
- "Les Femmes Avant le Patriarcat"
- "Patrimoine_ une histoire vraie"
**Scientific:**
- "Anthropologie"
- "Classes préparatoires"
- "Fondamentaux de la vie sociale"
- "Histoire de la Physique et Chimie"
- "Le carbone renouvelable"
**Politics:**
- "Capitalisme et liberté"
- "Gouvernance Le management totalitaire"
- "Introduction à l'économie politique"
- "Introduction à la politique comparée"
- "L anarchisme de droite"
- "Le socialisme démocratique"
- "Les relations internationales"
**Medical:**
- "Clinique de l'écriture"
- "Introduction à l'étude de la médecine expérimentale"
- "Physiologie et thérapie"
**Economics and Finance:**
- "Comprendre léconomie et la finance"
- "Discours sur la Dette"
- "Ecologie et capitalisme"
- "Economie monétaire Théories et politiques"
- "Etat du monde"
- "Introduction à l'économie"
- "Le Magicien de la finance"
- "Les seigneurs de l'argent_ Des Médicis au Bitcoin"
**Law:**
- "Droit des contrats spéciaux"
- "Droit international des relations diplomatiques"
- "Droit pénal général"
- "Le globe et la loi"
**Literature:**
- "Histoire littéraire d'Italie 4"
- "La Préparation du roman"
- "Le Démon de la théorie"
- "Les Origines de la Culture"
**Musical:**
- " Introduction au langage musical "
**Philosophy:**
- "Introduction à la métaphysique de Maurice Blondel"
- "Introduction à la pensée chinoise"
- "Introduction à la philosophie analytics"
- "Introduction à la philosophie de l'histoire"
- "Libertés et droits fondamentaux"
**Media:**
- "Les médias sociaux en entreprise"
|
CyberHarem/nonomi_bluearchive | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of nonomi/十六夜ノノミ/野宫 (Blue Archive)
This is the dataset of nonomi/十六夜ノノミ/野宫 (Blue Archive), containing 500 images and their tags.
The core tags of this character are `long_hair, breasts, halo, green_eyes, large_breasts, light_brown_hair, hat, sun_hat, white_headwear`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 985.59 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nonomi_bluearchive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 500 | 808.42 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nonomi_bluearchive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1357 | 1.73 GiB | [Download](https://huggingface.co/datasets/CyberHarem/nonomi_bluearchive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/nonomi_bluearchive',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
Tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 7 |  |  |  |  |  | 1boy, 1girl, blush, cleavage, smile, solo_focus, yellow_bikini, breasts_squeezed_together, collarbone, cum_on_breasts, huge_breasts, official_alternate_costume, open_mouth, pov, blue_sky, day, outdoors, penis, bare_shoulders, flower, paizuri_under_clothes, bracelet, closed_eyes, ejaculation, heart, looking_at_viewer, mosaic_censoring, sweat |
| 1 | 34 |  |  |  |  |  | 1girl, cleavage, collarbone, looking_at_viewer, navel, official_alternate_costume, solo, yellow_bikini, blush, outdoors, day, stomach, front-tie_bikini_top, side-tie_bikini_bottom, hat_flower, bare_shoulders, blue_sky, string_bikini, ocean, halterneck, thighs, :d, cloud, open_mouth, beach, cowboy_shot, very_long_hair, wet |
| 2 | 5 |  |  |  |  |  | 1boy, 1girl, blush, completely_nude, hetero, nipples, penis, smile, solo_focus, closed_mouth, collarbone, cum_on_breasts, green_halo, looking_at_viewer, paizuri, black_bow, mosaic_censoring, single_side_bun, upper_body, bar_censor, brown_hair, facial, hair_bow, heart, huge_breasts, pov_crotch |
| 3 | 30 |  |  |  |  |  | 1girl, collared_shirt, solo, white_shirt, looking_at_viewer, smile, blush, long_sleeves, black_skirt, id_card, pleated_skirt, lanyard, open_jacket, plaid_skirt, yellow_jacket, green_halo, school_uniform, open_mouth, simple_background, single_side_bun, white_background, closed_mouth |
| 4 | 5 |  |  |  |  |  | 1boy, 1girl, blush, hetero, navel, nipples, penis, sex, solo_focus, spread_legs, vaginal, green_halo, missionary, on_back, open_mouth, pussy, bikini_bottom_aside, collarbone, looking_at_viewer, mosaic_censoring, official_alternate_costume, pov, sweat, yellow_bikini, bar_censor, bed_sheet, dark-skinned_male, front-tie_bikini_top, grabbing_another's_breast, side-tie_bikini_bottom |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1boy | 1girl | blush | cleavage | smile | solo_focus | yellow_bikini | breasts_squeezed_together | collarbone | cum_on_breasts | huge_breasts | official_alternate_costume | open_mouth | pov | blue_sky | day | outdoors | penis | bare_shoulders | flower | paizuri_under_clothes | bracelet | closed_eyes | ejaculation | heart | looking_at_viewer | mosaic_censoring | sweat | navel | solo | stomach | front-tie_bikini_top | side-tie_bikini_bottom | hat_flower | string_bikini | ocean | halterneck | thighs | :d | cloud | beach | cowboy_shot | very_long_hair | wet | completely_nude | hetero | nipples | closed_mouth | green_halo | paizuri | black_bow | single_side_bun | upper_body | bar_censor | brown_hair | facial | hair_bow | pov_crotch | collared_shirt | white_shirt | long_sleeves | black_skirt | id_card | pleated_skirt | lanyard | open_jacket | plaid_skirt | yellow_jacket | school_uniform | simple_background | white_background | sex | spread_legs | vaginal | missionary | on_back | pussy | bikini_bottom_aside | bed_sheet | dark-skinned_male | grabbing_another's_breast |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------|:--------|:--------|:-----------|:--------|:-------------|:----------------|:----------------------------|:-------------|:-----------------|:---------------|:-----------------------------|:-------------|:------|:-----------|:------|:-----------|:--------|:-----------------|:---------|:------------------------|:-----------|:--------------|:--------------|:--------|:--------------------|:-------------------|:--------|:--------|:-------|:----------|:-----------------------|:-------------------------|:-------------|:----------------|:--------|:-------------|:---------|:-----|:--------|:--------|:--------------|:-----------------|:------|:------------------|:---------|:----------|:---------------|:-------------|:----------|:------------|:------------------|:-------------|:-------------|:-------------|:---------|:-----------|:-------------|:-----------------|:--------------|:---------------|:--------------|:----------|:----------------|:----------|:--------------|:--------------|:----------------|:-----------------|:--------------------|:-------------------|:------|:--------------|:----------|:-------------|:----------|:--------|:----------------------|:------------|:--------------------|:----------------------------|
| 0 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 34 |  |  |  |  |  | | X | X | X | | | X | | X | | | X | X | | X | X | X | | X | | | | | | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | X | X | | X | X | | | X | X | X | | | | | | | X | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 30 |  |  |  |  |  | | X | X | | X | | | | | | | | X | | | | | | | | | | | | | X | | | | X | | | | | | | | | | | | | | | | | | X | X | | | X | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | |
| 4 | 5 |  |  |  |  |  | X | X | X | | | X | X | | X | | | X | X | X | | | | X | | | | | | | | X | X | X | X | | | X | X | | | | | | | | | | | | | X | X | | X | | | | | X | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X |
|
hlillemark/flores200_eng_output_scaffolding_mix_mt5 | ---
dataset_info:
features:
- name: id
dtype: int32
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 9171714465
num_examples: 10240000
- name: val
num_bytes: 3827042
num_examples: 5000
- name: test
num_bytes: 7670994
num_examples: 10000
download_size: 4216144161
dataset_size: 9183212501
---
# Dataset Card for "flores200_scaffold_output_mix_mt5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
goodfellowliu/SRGAN_ImageNet | ---
license: openrail
---
|
polinaeterna/amazon_apparel | ---
dataset_info:
features:
- name: marketplace
dtype: string
- name: customer_id
dtype: string
- name: review_id
dtype: string
- name: product_id
dtype: string
- name: product_parent
dtype: string
- name: product_title
dtype: string
- name: product_category
dtype: string
- name: star_rating
dtype: int32
- name: helpful_votes
dtype: int32
- name: total_votes
dtype: int32
- name: vine
dtype:
class_label:
names:
'0': N
'1': Y
- name: verified_purchase
dtype:
class_label:
names:
'0': N
'1': Y
- name: review_headline
dtype: string
- name: review_body
dtype: string
- name: review_date
dtype: string
splits:
- name: train
num_bytes: 2254343574
num_examples: 5906333
download_size: 1027207588
dataset_size: 2254343574
---
# Dataset Card for "amazon_apparel"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AdapterOcean/Open_Platypus_standardized_cluster_13_alpaca | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 1457891
num_examples: 1634
download_size: 668514
dataset_size: 1457891
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Open_Platypus_standardized_cluster_13_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Multimodal-Fatima/VQAv2_sample_validation_facebook_opt_6.7b_mode_VQAv2_visclues_detection_ns_1000_open_ended | ---
dataset_info:
features:
- name: id
dtype: int64
- name: question
dtype: string
- name: true_label
sequence: string
- name: prediction
dtype: string
splits:
- name: fewshot_0_bs_32
num_bytes: 143135
num_examples: 1000
download_size: 54496
dataset_size: 143135
---
# Dataset Card for "VQAv2_sample_validation_facebook_opt_6.7b_mode_VQAv2_visclues_detection_ns_1000_open_ended"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/kagerou_kantaicollection | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of kagerou/陽炎 (Kantai Collection)
This is the dataset of kagerou/陽炎 (Kantai Collection), containing 500 images and their tags.
The core tags of this character are `brown_hair, twintails, long_hair, ribbon, ahoge, hair_ribbon, purple_eyes, green_ribbon, neck_ribbon, white_ribbon, yellow_ribbon`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 480.24 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kagerou_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 317.24 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kagerou_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1209 | 698.22 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kagerou_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 442.69 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kagerou_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1209 | 906.73 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kagerou_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kagerou_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
Tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 14 |  |  |  |  |  | 1girl, bike_shorts, black_gloves, fingerless_gloves, looking_at_viewer, pleated_skirt, school_uniform, short_sleeves, shorts_under_skirt, solo, white_shirt, grey_vest, cowboy_shot, simple_background, grey_skirt, black_vest, white_background, black_shorts, dress_shirt, smile, standing |
| 1 | 8 |  |  |  |  |  | 1girl, grey_vest, pleated_skirt, school_uniform, short_sleeves, solo, white_gloves, white_shirt, looking_at_viewer, simple_background, smile, grey_skirt, white_background, upper_body, blush, one_eye_closed |
| 2 | 15 |  |  |  |  |  | 1girl, school_uniform, short_sleeves, solo, upper_body, white_shirt, black_vest, grey_vest, simple_background, white_background, looking_at_viewer, gloves, smile |
| 3 | 6 |  |  |  |  |  | 1girl, black_gloves, black_vest, school_uniform, short_sleeves, solo, upper_body, white_shirt, fingerless_gloves, grey_vest, looking_at_viewer, grin |
| 4 | 6 |  |  |  |  |  | 1girl, open_mouth, school_uniform, short_sleeves, solo, vest, white_gloves, looking_at_viewer, shirt, :d, blush, pleated_skirt, twitter_username |
| 5 | 9 |  |  |  |  |  | 1girl, looking_at_viewer, solo, collarbone, navel, small_breasts, blush, white_panties, white_bra, underwear_only, cowboy_shot, medium_breasts, open_mouth, smile, white_background |
| 6 | 5 |  |  |  |  |  | 1girl, blue_sky, cloud, cowboy_shot, day, looking_at_viewer, side-tie_bikini_bottom, solo, outdoors, smile, white_bikini, ocean, collarbone, front-tie_top, medium_breasts, navel, open_mouth, rock, sitting, small_breasts, standing |
| 7 | 9 |  |  |  |  |  | 1girl, cowboy_shot, looking_at_viewer, solo, alternate_costume, blue_one-piece_swimsuit, collarbone, competition_swimsuit, standing, gradient_background, school_swimsuit, smile, covered_navel, medium_breasts, open_mouth, simple_background, white_background, white_jacket |
| 8 | 6 |  |  |  |  |  | 1girl, alternate_costume, gym_shirt, gym_uniform, solo, white_shirt, blue_buruma, cowboy_shot, looking_at_viewer, short_sleeves, t-shirt |
| 9 | 17 |  |  |  |  |  | 1girl, solo, looking_at_viewer, detached_collar, alternate_costume, playboy_bunny, strapless_leotard, black_leotard, fake_animal_ears, rabbit_ears, cleavage, medium_breasts, wrist_cuffs, black_pantyhose, bowtie, cowboy_shot, open_mouth, simple_background, smile, blush, small_breasts, white_background, covered_navel |
| 10 | 13 |  |  |  |  |  | 1girl, solo, looking_at_viewer, enmaided, open_mouth, white_apron, cowboy_shot, frilled_apron, maid_headdress, smile, black_dress, gloves, gradient_background, short_sleeves, simple_background, skirt, thighhighs |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bike_shorts | black_gloves | fingerless_gloves | looking_at_viewer | pleated_skirt | school_uniform | short_sleeves | shorts_under_skirt | solo | white_shirt | grey_vest | cowboy_shot | simple_background | grey_skirt | black_vest | white_background | black_shorts | dress_shirt | smile | standing | white_gloves | upper_body | blush | one_eye_closed | gloves | grin | open_mouth | vest | shirt | :d | twitter_username | collarbone | navel | small_breasts | white_panties | white_bra | underwear_only | medium_breasts | blue_sky | cloud | day | side-tie_bikini_bottom | outdoors | white_bikini | ocean | front-tie_top | rock | sitting | alternate_costume | blue_one-piece_swimsuit | competition_swimsuit | gradient_background | school_swimsuit | covered_navel | white_jacket | gym_shirt | gym_uniform | blue_buruma | t-shirt | detached_collar | playboy_bunny | strapless_leotard | black_leotard | fake_animal_ears | rabbit_ears | cleavage | wrist_cuffs | black_pantyhose | bowtie | enmaided | white_apron | frilled_apron | maid_headdress | black_dress | skirt | thighhighs |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:--------------|:---------------|:--------------------|:--------------------|:----------------|:-----------------|:----------------|:---------------------|:-------|:--------------|:------------|:--------------|:--------------------|:-------------|:-------------|:-------------------|:---------------|:--------------|:--------|:-----------|:---------------|:-------------|:--------|:-----------------|:---------|:-------|:-------------|:-------|:--------|:-----|:-------------------|:-------------|:--------|:----------------|:----------------|:------------|:-----------------|:-----------------|:-----------|:--------|:------|:-------------------------|:-----------|:---------------|:--------|:----------------|:-------|:----------|:--------------------|:--------------------------|:-----------------------|:----------------------|:------------------|:----------------|:---------------|:------------|:--------------|:--------------|:----------|:------------------|:----------------|:--------------------|:----------------|:-------------------|:--------------|:-----------|:--------------|:------------------|:---------|:-----------|:--------------|:----------------|:-----------------|:--------------|:--------|:-------------|
| 0 | 14 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 8 |  |  |  |  |  | X | | | | X | X | X | X | | X | X | X | | X | X | | X | | | X | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 15 |  |  |  |  |  | X | | | | X | | X | X | | X | X | X | | X | | X | X | | | X | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | | X | X | X | | X | X | | X | X | X | | | | X | | | | | | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 6 |  |  |  |  |  | X | | | | X | X | X | X | | X | | | | | | | | | | | | X | | X | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 9 |  |  |  |  |  | X | | | | X | | | | | X | | | X | | | | X | | | X | | | | X | | | | X | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 5 |  |  |  |  |  | X | | | | X | | | | | X | | | X | | | | | | | X | X | | | | | | | X | | | | | X | X | X | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 9 |  |  |  |  |  | X | | | | X | | | | | X | | | X | X | | | X | | | X | X | | | | | | | X | | | | | X | | | | | | X | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 8 | 6 |  |  |  |  |  | X | | | | X | | | X | | X | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | |
| 9 | 17 |  |  |  |  |  | X | | | | X | | | | | X | | | X | X | | | X | | | X | | | | X | | | | X | | | | | | | X | | | | X | | | | | | | | | | | X | | | | | X | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | |
| 10 | 13 |  |  |  |  |  | X | | | | X | | | X | | X | | | X | X | | | | | | X | | | | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X |
|
open-llm-leaderboard/details_lex-hue__Delexa-7b-128k | ---
pretty_name: Evaluation run of lex-hue/Delexa-7b-128k
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [lex-hue/Delexa-7b-128k](https://huggingface.co/lex-hue/Delexa-7b-128k) on the\
\ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_lex-hue__Delexa-7b-128k\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-04-15T21:36:27.350319](https://huggingface.co/datasets/open-llm-leaderboard/details_lex-hue__Delexa-7b-128k/blob/main/results_2024-04-15T21-36-27.350319.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6496468924225586,\n\
\ \"acc_stderr\": 0.03207144308728149,\n \"acc_norm\": 0.6509773836165138,\n\
\ \"acc_norm_stderr\": 0.0327145643645653,\n \"mc1\": 0.4418604651162791,\n\
\ \"mc1_stderr\": 0.017384767478986218,\n \"mc2\": 0.6217742992950922,\n\
\ \"mc2_stderr\": 0.015455929661783052\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6467576791808873,\n \"acc_stderr\": 0.013967822714840055,\n\
\ \"acc_norm\": 0.6825938566552902,\n \"acc_norm_stderr\": 0.013602239088038169\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6879107747460665,\n\
\ \"acc_stderr\": 0.004623990785158488,\n \"acc_norm\": 0.8650667197769368,\n\
\ \"acc_norm_stderr\": 0.003409540533249841\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252606,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252606\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6222222222222222,\n\
\ \"acc_stderr\": 0.04188307537595853,\n \"acc_norm\": 0.6222222222222222,\n\
\ \"acc_norm_stderr\": 0.04188307537595853\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6973684210526315,\n \"acc_stderr\": 0.03738520676119669,\n\
\ \"acc_norm\": 0.6973684210526315,\n \"acc_norm_stderr\": 0.03738520676119669\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.59,\n\
\ \"acc_stderr\": 0.049431107042371025,\n \"acc_norm\": 0.59,\n \
\ \"acc_norm_stderr\": 0.049431107042371025\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7056603773584905,\n \"acc_stderr\": 0.02804918631569526,\n\
\ \"acc_norm\": 0.7056603773584905,\n \"acc_norm_stderr\": 0.02804918631569526\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7569444444444444,\n\
\ \"acc_stderr\": 0.035868792800803406,\n \"acc_norm\": 0.7569444444444444,\n\
\ \"acc_norm_stderr\": 0.035868792800803406\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.45,\n \"acc_stderr\": 0.05,\n \"acc_norm\"\
: 0.45,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_computer_science|5\"\
: {\n \"acc\": 0.53,\n \"acc_stderr\": 0.050161355804659205,\n \
\ \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.050161355804659205\n \
\ },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\"\
: 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.32,\n\
\ \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-college_medicine|5\"\
: {\n \"acc\": 0.6647398843930635,\n \"acc_stderr\": 0.03599586301247077,\n\
\ \"acc_norm\": 0.6647398843930635,\n \"acc_norm_stderr\": 0.03599586301247077\n\
\ },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.39215686274509803,\n\
\ \"acc_stderr\": 0.04858083574266345,\n \"acc_norm\": 0.39215686274509803,\n\
\ \"acc_norm_stderr\": 0.04858083574266345\n },\n \"harness|hendrycksTest-computer_security|5\"\
: {\n \"acc\": 0.77,\n \"acc_stderr\": 0.04229525846816507,\n \
\ \"acc_norm\": 0.77,\n \"acc_norm_stderr\": 0.04229525846816507\n \
\ },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5872340425531914,\n\
\ \"acc_stderr\": 0.03218471141400351,\n \"acc_norm\": 0.5872340425531914,\n\
\ \"acc_norm_stderr\": 0.03218471141400351\n },\n \"harness|hendrycksTest-econometrics|5\"\
: {\n \"acc\": 0.4824561403508772,\n \"acc_stderr\": 0.04700708033551038,\n\
\ \"acc_norm\": 0.4824561403508772,\n \"acc_norm_stderr\": 0.04700708033551038\n\
\ },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\"\
: 0.5586206896551724,\n \"acc_stderr\": 0.04137931034482757,\n \"\
acc_norm\": 0.5586206896551724,\n \"acc_norm_stderr\": 0.04137931034482757\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.3994708994708995,\n \"acc_stderr\": 0.02522545028406788,\n \"\
acc_norm\": 0.3994708994708995,\n \"acc_norm_stderr\": 0.02522545028406788\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5079365079365079,\n\
\ \"acc_stderr\": 0.044715725362943486,\n \"acc_norm\": 0.5079365079365079,\n\
\ \"acc_norm_stderr\": 0.044715725362943486\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8,\n\
\ \"acc_stderr\": 0.02275520495954294,\n \"acc_norm\": 0.8,\n \
\ \"acc_norm_stderr\": 0.02275520495954294\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5073891625615764,\n \"acc_stderr\": 0.0351760354036101,\n\
\ \"acc_norm\": 0.5073891625615764,\n \"acc_norm_stderr\": 0.0351760354036101\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621505,\n \"acc_norm\"\
: 0.68,\n \"acc_norm_stderr\": 0.04688261722621505\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7757575757575758,\n \"acc_stderr\": 0.03256866661681102,\n\
\ \"acc_norm\": 0.7757575757575758,\n \"acc_norm_stderr\": 0.03256866661681102\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7929292929292929,\n \"acc_stderr\": 0.02886977846026704,\n \"\
acc_norm\": 0.7929292929292929,\n \"acc_norm_stderr\": 0.02886977846026704\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.02199531196364424,\n\
\ \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.02199531196364424\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.676923076923077,\n \"acc_stderr\": 0.02371088850197057,\n \
\ \"acc_norm\": 0.676923076923077,\n \"acc_norm_stderr\": 0.02371088850197057\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3592592592592593,\n \"acc_stderr\": 0.029252905927251972,\n \
\ \"acc_norm\": 0.3592592592592593,\n \"acc_norm_stderr\": 0.029252905927251972\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6974789915966386,\n \"acc_stderr\": 0.029837962388291946,\n\
\ \"acc_norm\": 0.6974789915966386,\n \"acc_norm_stderr\": 0.029837962388291946\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.36423841059602646,\n \"acc_stderr\": 0.03929111781242742,\n \"\
acc_norm\": 0.36423841059602646,\n \"acc_norm_stderr\": 0.03929111781242742\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8477064220183487,\n \"acc_stderr\": 0.015405084393157074,\n \"\
acc_norm\": 0.8477064220183487,\n \"acc_norm_stderr\": 0.015405084393157074\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5046296296296297,\n \"acc_stderr\": 0.03409825519163572,\n \"\
acc_norm\": 0.5046296296296297,\n \"acc_norm_stderr\": 0.03409825519163572\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8480392156862745,\n \"acc_stderr\": 0.025195658428931796,\n \"\
acc_norm\": 0.8480392156862745,\n \"acc_norm_stderr\": 0.025195658428931796\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.810126582278481,\n \"acc_stderr\": 0.02553010046023349,\n \
\ \"acc_norm\": 0.810126582278481,\n \"acc_norm_stderr\": 0.02553010046023349\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7174887892376681,\n\
\ \"acc_stderr\": 0.030216831011508766,\n \"acc_norm\": 0.7174887892376681,\n\
\ \"acc_norm_stderr\": 0.030216831011508766\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7557251908396947,\n \"acc_stderr\": 0.03768335959728744,\n\
\ \"acc_norm\": 0.7557251908396947,\n \"acc_norm_stderr\": 0.03768335959728744\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8099173553719008,\n \"acc_stderr\": 0.03581796951709282,\n \"\
acc_norm\": 0.8099173553719008,\n \"acc_norm_stderr\": 0.03581796951709282\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7962962962962963,\n\
\ \"acc_stderr\": 0.03893542518824847,\n \"acc_norm\": 0.7962962962962963,\n\
\ \"acc_norm_stderr\": 0.03893542518824847\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7852760736196319,\n \"acc_stderr\": 0.032262193772867744,\n\
\ \"acc_norm\": 0.7852760736196319,\n \"acc_norm_stderr\": 0.032262193772867744\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4732142857142857,\n\
\ \"acc_stderr\": 0.04738975119274155,\n \"acc_norm\": 0.4732142857142857,\n\
\ \"acc_norm_stderr\": 0.04738975119274155\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7864077669902912,\n \"acc_stderr\": 0.040580420156460344,\n\
\ \"acc_norm\": 0.7864077669902912,\n \"acc_norm_stderr\": 0.040580420156460344\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8675213675213675,\n\
\ \"acc_stderr\": 0.022209309073165616,\n \"acc_norm\": 0.8675213675213675,\n\
\ \"acc_norm_stderr\": 0.022209309073165616\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.72,\n \"acc_stderr\": 0.045126085985421276,\n \
\ \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.045126085985421276\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8237547892720306,\n\
\ \"acc_stderr\": 0.013625556907993452,\n \"acc_norm\": 0.8237547892720306,\n\
\ \"acc_norm_stderr\": 0.013625556907993452\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7369942196531792,\n \"acc_stderr\": 0.02370309952525817,\n\
\ \"acc_norm\": 0.7369942196531792,\n \"acc_norm_stderr\": 0.02370309952525817\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.42905027932960893,\n\
\ \"acc_stderr\": 0.016553287863116037,\n \"acc_norm\": 0.42905027932960893,\n\
\ \"acc_norm_stderr\": 0.016553287863116037\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7189542483660131,\n \"acc_stderr\": 0.02573885479781873,\n\
\ \"acc_norm\": 0.7189542483660131,\n \"acc_norm_stderr\": 0.02573885479781873\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6945337620578779,\n\
\ \"acc_stderr\": 0.02616058445014045,\n \"acc_norm\": 0.6945337620578779,\n\
\ \"acc_norm_stderr\": 0.02616058445014045\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7376543209876543,\n \"acc_stderr\": 0.024477222856135114,\n\
\ \"acc_norm\": 0.7376543209876543,\n \"acc_norm_stderr\": 0.024477222856135114\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4645390070921986,\n \"acc_stderr\": 0.02975238965742705,\n \
\ \"acc_norm\": 0.4645390070921986,\n \"acc_norm_stderr\": 0.02975238965742705\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4791395045632334,\n\
\ \"acc_stderr\": 0.012759117066518019,\n \"acc_norm\": 0.4791395045632334,\n\
\ \"acc_norm_stderr\": 0.012759117066518019\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.7169117647058824,\n \"acc_stderr\": 0.027365861131513812,\n\
\ \"acc_norm\": 0.7169117647058824,\n \"acc_norm_stderr\": 0.027365861131513812\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6388888888888888,\n \"acc_stderr\": 0.01943177567703731,\n \
\ \"acc_norm\": 0.6388888888888888,\n \"acc_norm_stderr\": 0.01943177567703731\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6727272727272727,\n\
\ \"acc_stderr\": 0.0449429086625209,\n \"acc_norm\": 0.6727272727272727,\n\
\ \"acc_norm_stderr\": 0.0449429086625209\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7346938775510204,\n \"acc_stderr\": 0.028263889943784596,\n\
\ \"acc_norm\": 0.7346938775510204,\n \"acc_norm_stderr\": 0.028263889943784596\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.835820895522388,\n\
\ \"acc_stderr\": 0.026193923544454115,\n \"acc_norm\": 0.835820895522388,\n\
\ \"acc_norm_stderr\": 0.026193923544454115\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.87,\n \"acc_stderr\": 0.033799766898963086,\n \
\ \"acc_norm\": 0.87,\n \"acc_norm_stderr\": 0.033799766898963086\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5301204819277109,\n\
\ \"acc_stderr\": 0.03885425420866767,\n \"acc_norm\": 0.5301204819277109,\n\
\ \"acc_norm_stderr\": 0.03885425420866767\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8187134502923976,\n \"acc_stderr\": 0.02954774168764004,\n\
\ \"acc_norm\": 0.8187134502923976,\n \"acc_norm_stderr\": 0.02954774168764004\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4418604651162791,\n\
\ \"mc1_stderr\": 0.017384767478986218,\n \"mc2\": 0.6217742992950922,\n\
\ \"mc2_stderr\": 0.015455929661783052\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7916337805840569,\n \"acc_stderr\": 0.011414554399987729\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6421531463229719,\n \
\ \"acc_stderr\": 0.013204142536119947\n }\n}\n```"
repo_url: https://huggingface.co/lex-hue/Delexa-7b-128k
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|arc:challenge|25_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|gsm8k|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hellaswag|10_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-15T21-36-27.350319.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-15T21-36-27.350319.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- '**/details_harness|winogrande|5_2024-04-15T21-36-27.350319.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-04-15T21-36-27.350319.parquet'
- config_name: results
data_files:
- split: 2024_04_15T21_36_27.350319
path:
- results_2024-04-15T21-36-27.350319.parquet
- split: latest
path:
- results_2024-04-15T21-36-27.350319.parquet
---
# Dataset Card for Evaluation run of lex-hue/Delexa-7b-128k
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [lex-hue/Delexa-7b-128k](https://huggingface.co/lex-hue/Delexa-7b-128k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_lex-hue__Delexa-7b-128k",
"harness_winogrande_5",
	split="latest")
```
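Judging from the YAML config above, the timestamped split names appear to be derived from the run timestamp by replacing the `-` and `:` separators with underscores. A small pure-Python sketch of that (inferred, not officially documented) convention:

```python
def timestamp_to_split_name(timestamp: str) -> str:
    """Turn an ISO-like run timestamp into the split name used in this repo.

    E.g. "2024-04-15T21:36:27.350319" -> "2024_04_15T21_36_27.350319".
    This mirrors the naming visible in the YAML config above; it is an
    inferred convention, not an official API.
    """
    return timestamp.replace("-", "_").replace(":", "_")

print(timestamp_to_split_name("2024-04-15T21:36:27.350319"))
# -> 2024_04_15T21_36_27.350319
```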
## Latest results
These are the [latest results from run 2024-04-15T21:36:27.350319](https://huggingface.co/datasets/open-llm-leaderboard/details_lex-hue__Delexa-7b-128k/blob/main/results_2024-04-15T21-36-27.350319.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6496468924225586,
"acc_stderr": 0.03207144308728149,
"acc_norm": 0.6509773836165138,
"acc_norm_stderr": 0.0327145643645653,
"mc1": 0.4418604651162791,
"mc1_stderr": 0.017384767478986218,
"mc2": 0.6217742992950922,
"mc2_stderr": 0.015455929661783052
},
"harness|arc:challenge|25": {
"acc": 0.6467576791808873,
"acc_stderr": 0.013967822714840055,
"acc_norm": 0.6825938566552902,
"acc_norm_stderr": 0.013602239088038169
},
"harness|hellaswag|10": {
"acc": 0.6879107747460665,
"acc_stderr": 0.004623990785158488,
"acc_norm": 0.8650667197769368,
"acc_norm_stderr": 0.003409540533249841
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252606,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252606
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6222222222222222,
"acc_stderr": 0.04188307537595853,
"acc_norm": 0.6222222222222222,
"acc_norm_stderr": 0.04188307537595853
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6973684210526315,
"acc_stderr": 0.03738520676119669,
"acc_norm": 0.6973684210526315,
"acc_norm_stderr": 0.03738520676119669
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.59,
"acc_stderr": 0.049431107042371025,
"acc_norm": 0.59,
"acc_norm_stderr": 0.049431107042371025
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7056603773584905,
"acc_stderr": 0.02804918631569526,
"acc_norm": 0.7056603773584905,
"acc_norm_stderr": 0.02804918631569526
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7569444444444444,
"acc_stderr": 0.035868792800803406,
"acc_norm": 0.7569444444444444,
"acc_norm_stderr": 0.035868792800803406
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.53,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.53,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6647398843930635,
"acc_stderr": 0.03599586301247077,
"acc_norm": 0.6647398843930635,
"acc_norm_stderr": 0.03599586301247077
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.39215686274509803,
"acc_stderr": 0.04858083574266345,
"acc_norm": 0.39215686274509803,
"acc_norm_stderr": 0.04858083574266345
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816507,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816507
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5872340425531914,
"acc_stderr": 0.03218471141400351,
"acc_norm": 0.5872340425531914,
"acc_norm_stderr": 0.03218471141400351
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4824561403508772,
"acc_stderr": 0.04700708033551038,
"acc_norm": 0.4824561403508772,
"acc_norm_stderr": 0.04700708033551038
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5586206896551724,
"acc_stderr": 0.04137931034482757,
"acc_norm": 0.5586206896551724,
"acc_norm_stderr": 0.04137931034482757
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3994708994708995,
"acc_stderr": 0.02522545028406788,
"acc_norm": 0.3994708994708995,
"acc_norm_stderr": 0.02522545028406788
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5079365079365079,
"acc_stderr": 0.044715725362943486,
"acc_norm": 0.5079365079365079,
"acc_norm_stderr": 0.044715725362943486
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8,
"acc_stderr": 0.02275520495954294,
"acc_norm": 0.8,
"acc_norm_stderr": 0.02275520495954294
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5073891625615764,
"acc_stderr": 0.0351760354036101,
"acc_norm": 0.5073891625615764,
"acc_norm_stderr": 0.0351760354036101
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.68,
"acc_stderr": 0.04688261722621505,
"acc_norm": 0.68,
"acc_norm_stderr": 0.04688261722621505
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7757575757575758,
"acc_stderr": 0.03256866661681102,
"acc_norm": 0.7757575757575758,
"acc_norm_stderr": 0.03256866661681102
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7929292929292929,
"acc_stderr": 0.02886977846026704,
"acc_norm": 0.7929292929292929,
"acc_norm_stderr": 0.02886977846026704
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8963730569948186,
"acc_stderr": 0.02199531196364424,
"acc_norm": 0.8963730569948186,
"acc_norm_stderr": 0.02199531196364424
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.676923076923077,
"acc_stderr": 0.02371088850197057,
"acc_norm": 0.676923076923077,
"acc_norm_stderr": 0.02371088850197057
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3592592592592593,
"acc_stderr": 0.029252905927251972,
"acc_norm": 0.3592592592592593,
"acc_norm_stderr": 0.029252905927251972
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6974789915966386,
"acc_stderr": 0.029837962388291946,
"acc_norm": 0.6974789915966386,
"acc_norm_stderr": 0.029837962388291946
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.36423841059602646,
"acc_stderr": 0.03929111781242742,
"acc_norm": 0.36423841059602646,
"acc_norm_stderr": 0.03929111781242742
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8477064220183487,
"acc_stderr": 0.015405084393157074,
"acc_norm": 0.8477064220183487,
"acc_norm_stderr": 0.015405084393157074
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5046296296296297,
"acc_stderr": 0.03409825519163572,
"acc_norm": 0.5046296296296297,
"acc_norm_stderr": 0.03409825519163572
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8480392156862745,
"acc_stderr": 0.025195658428931796,
"acc_norm": 0.8480392156862745,
"acc_norm_stderr": 0.025195658428931796
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.810126582278481,
"acc_stderr": 0.02553010046023349,
"acc_norm": 0.810126582278481,
"acc_norm_stderr": 0.02553010046023349
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7174887892376681,
"acc_stderr": 0.030216831011508766,
"acc_norm": 0.7174887892376681,
"acc_norm_stderr": 0.030216831011508766
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7557251908396947,
"acc_stderr": 0.03768335959728744,
"acc_norm": 0.7557251908396947,
"acc_norm_stderr": 0.03768335959728744
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8099173553719008,
"acc_stderr": 0.03581796951709282,
"acc_norm": 0.8099173553719008,
"acc_norm_stderr": 0.03581796951709282
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7962962962962963,
"acc_stderr": 0.03893542518824847,
"acc_norm": 0.7962962962962963,
"acc_norm_stderr": 0.03893542518824847
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7852760736196319,
"acc_stderr": 0.032262193772867744,
"acc_norm": 0.7852760736196319,
"acc_norm_stderr": 0.032262193772867744
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4732142857142857,
"acc_stderr": 0.04738975119274155,
"acc_norm": 0.4732142857142857,
"acc_norm_stderr": 0.04738975119274155
},
"harness|hendrycksTest-management|5": {
"acc": 0.7864077669902912,
"acc_stderr": 0.040580420156460344,
"acc_norm": 0.7864077669902912,
"acc_norm_stderr": 0.040580420156460344
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8675213675213675,
"acc_stderr": 0.022209309073165616,
"acc_norm": 0.8675213675213675,
"acc_norm_stderr": 0.022209309073165616
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.72,
"acc_stderr": 0.045126085985421276,
"acc_norm": 0.72,
"acc_norm_stderr": 0.045126085985421276
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8237547892720306,
"acc_stderr": 0.013625556907993452,
"acc_norm": 0.8237547892720306,
"acc_norm_stderr": 0.013625556907993452
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7369942196531792,
"acc_stderr": 0.02370309952525817,
"acc_norm": 0.7369942196531792,
"acc_norm_stderr": 0.02370309952525817
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.42905027932960893,
"acc_stderr": 0.016553287863116037,
"acc_norm": 0.42905027932960893,
"acc_norm_stderr": 0.016553287863116037
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7189542483660131,
"acc_stderr": 0.02573885479781873,
"acc_norm": 0.7189542483660131,
"acc_norm_stderr": 0.02573885479781873
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6945337620578779,
"acc_stderr": 0.02616058445014045,
"acc_norm": 0.6945337620578779,
"acc_norm_stderr": 0.02616058445014045
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7376543209876543,
"acc_stderr": 0.024477222856135114,
"acc_norm": 0.7376543209876543,
"acc_norm_stderr": 0.024477222856135114
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4645390070921986,
"acc_stderr": 0.02975238965742705,
"acc_norm": 0.4645390070921986,
"acc_norm_stderr": 0.02975238965742705
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4791395045632334,
"acc_stderr": 0.012759117066518019,
"acc_norm": 0.4791395045632334,
"acc_norm_stderr": 0.012759117066518019
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7169117647058824,
"acc_stderr": 0.027365861131513812,
"acc_norm": 0.7169117647058824,
"acc_norm_stderr": 0.027365861131513812
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6388888888888888,
"acc_stderr": 0.01943177567703731,
"acc_norm": 0.6388888888888888,
"acc_norm_stderr": 0.01943177567703731
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6727272727272727,
"acc_stderr": 0.0449429086625209,
"acc_norm": 0.6727272727272727,
"acc_norm_stderr": 0.0449429086625209
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7346938775510204,
"acc_stderr": 0.028263889943784596,
"acc_norm": 0.7346938775510204,
"acc_norm_stderr": 0.028263889943784596
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.835820895522388,
"acc_stderr": 0.026193923544454115,
"acc_norm": 0.835820895522388,
"acc_norm_stderr": 0.026193923544454115
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.87,
"acc_stderr": 0.033799766898963086,
"acc_norm": 0.87,
"acc_norm_stderr": 0.033799766898963086
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5301204819277109,
"acc_stderr": 0.03885425420866767,
"acc_norm": 0.5301204819277109,
"acc_norm_stderr": 0.03885425420866767
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8187134502923976,
"acc_stderr": 0.02954774168764004,
"acc_norm": 0.8187134502923976,
"acc_norm_stderr": 0.02954774168764004
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4418604651162791,
"mc1_stderr": 0.017384767478986218,
"mc2": 0.6217742992950922,
"mc2_stderr": 0.015455929661783052
},
"harness|winogrande|5": {
"acc": 0.7916337805840569,
"acc_stderr": 0.011414554399987729
},
"harness|gsm8k|5": {
"acc": 0.6421531463229719,
"acc_stderr": 0.013204142536119947
}
}
```
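Each metric key in the JSON above follows a `suite|task|num_fewshot` pattern separated by pipes. A small pure-Python helper (illustrative only, not part of the harness API) can split such a key apart:

```python
def parse_metric_key(key):
    """Split a key like "harness|hendrycksTest-anatomy|5" into its parts.

    Returns (suite, task, num_fewshot). The pipe-separated layout is taken
    from the results JSON above; this helper is illustrative only.
    """
    suite, task, shots = key.split("|")
    return suite, task, int(shots)

print(parse_metric_key("harness|arc:challenge|25"))
# -> ('harness', 'arc:challenge', 25)
```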
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
thiefcat/data01 | ---
configs:
- config_name: default
data_files:
- split: train
path: data.csv
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
roleplay4fun/limarp | ---
dataset_info:
features:
- name: text
dtype: string
- name: tokens
dtype: int64
splits:
- name: train
num_bytes: 36294296
num_examples: 2003
download_size: 20972693
dataset_size: 36294296
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Astr0nautico/joaogomes2 | ---
license: openrail
---
|
clu-ling/azaheadhealth | ---
license: apache-2.0
---
# azaheadhealth
### Dataset INFO
```python
features=datasets.Features(
{
"text": datasets.Value("string"),
"label": datasets.ClassLabel(num_classes=2, names=["NEGATIVE", "POSITIVE"]),
}
),
supervised_keys=None,
task_templates=[
TextClassification(
text_column="text", label_column="label"
)
]
```
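The `ClassLabel` feature above maps each label name to an integer id by its position in the `names` list. A minimal pure-Python sketch of that behaviour (a toy stand-in, not the actual `datasets` implementation):

```python
class TinyClassLabel:
    """Toy stand-in for datasets.ClassLabel: names map to ids by position."""

    def __init__(self, names):
        self.names = list(names)
        self._name2id = {name: i for i, name in enumerate(self.names)}

    def str2int(self, name):
        return self._name2id[name]

    def int2str(self, idx):
        return self.names[idx]

labels = TinyClassLabel(names=["NEGATIVE", "POSITIVE"])
print(labels.str2int("POSITIVE"))  # -> 1
print(labels.int2str(0))           # -> NEGATIVE
```

The real `datasets.ClassLabel` exposes the same `str2int`/`int2str` method names, which is what makes the integer `label` column round-trippable back to readable class names.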
### Dataset DESCRIPTION
`azaheadhealth` is a custom dataset for training binary text classifiers in the public health domain.
02.05.24 - The `small` dataset is available. It contains `train` and `test` splits with 160 and 24 examples respectively, each at roughly a 10:6 negative-to-positive ratio. |
bigbio/twadrl |
---
language:
- en
bigbio_language:
- English
license: cc-by-4.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_4p0
pretty_name: TwADR-L
homepage: https://zenodo.org/record/55013
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
---
# Dataset Card for TwADR-L
## Dataset Description
- **Homepage:** https://zenodo.org/record/55013
- **Pubmed:** False
- **Public:** True
- **Tasks:** NER,NED
The TwADR-L dataset contains medical concepts written on social media (Twitter) mapped to how they are formally written in medical ontologies (SIDER 4).
## Citation Information
```
@inproceedings{limsopatham-collier-2016-normalising,
title = "Normalising Medical Concepts in Social Media Texts by Learning Semantic Representation",
author = "Limsopatham, Nut and
Collier, Nigel",
booktitle = "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2016",
address = "Berlin, Germany",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P16-1096",
doi = "10.18653/v1/P16-1096",
pages = "1014--1023",
}
```
|
Codec-SUPERB/vox_lingua_top10_unit | ---
configs:
- config_name: default
data_files:
- split: academicodec_hifi_16k_320d
path: data/academicodec_hifi_16k_320d-*
- split: academicodec_hifi_16k_320d_large_uni
path: data/academicodec_hifi_16k_320d_large_uni-*
- split: academicodec_hifi_24k_320d
path: data/academicodec_hifi_24k_320d-*
- split: audiodec_24k_320d
path: data/audiodec_24k_320d-*
- split: dac_16k
path: data/dac_16k-*
- split: dac_24k
path: data/dac_24k-*
- split: dac_44k
path: data/dac_44k-*
- split: encodec_24k_12bps
path: data/encodec_24k_12bps-*
- split: encodec_24k_1_5bps
path: data/encodec_24k_1_5bps-*
- split: encodec_24k_24bps
path: data/encodec_24k_24bps-*
- split: encodec_24k_3bps
path: data/encodec_24k_3bps-*
- split: encodec_24k_6bps
path: data/encodec_24k_6bps-*
- split: funcodec_en_libritts_16k_gr1nq32ds320
path: data/funcodec_en_libritts_16k_gr1nq32ds320-*
- split: funcodec_en_libritts_16k_gr8nq32ds320
path: data/funcodec_en_libritts_16k_gr8nq32ds320-*
- split: funcodec_en_libritts_16k_nq32ds320
path: data/funcodec_en_libritts_16k_nq32ds320-*
- split: funcodec_en_libritts_16k_nq32ds640
path: data/funcodec_en_libritts_16k_nq32ds640-*
- split: funcodec_zh_en_16k_nq32ds320
path: data/funcodec_zh_en_16k_nq32ds320-*
- split: funcodec_zh_en_16k_nq32ds640
path: data/funcodec_zh_en_16k_nq32ds640-*
- split: speech_tokenizer_16k
path: data/speech_tokenizer_16k-*
dataset_info:
features:
- name: id
dtype: string
- name: unit
sequence:
sequence: int64
splits:
- name: academicodec_hifi_16k_320d
num_bytes: 29050426
num_examples: 972
- name: academicodec_hifi_16k_320d_large_uni
num_bytes: 29050426
num_examples: 972
- name: academicodec_hifi_24k_320d
num_bytes: 43544890
num_examples: 972
- name: audiodec_24k_320d
num_bytes: 92891386
num_examples: 972
- name: dac_16k
num_bytes: 109267642
num_examples: 972
- name: dac_24k
num_bytes: 446823802
num_examples: 972
- name: dac_44k
num_bytes: 145647658
num_examples: 972
- name: encodec_24k_12bps
num_bytes: 174041722
num_examples: 972
- name: encodec_24k_1_5bps
num_bytes: 21795418
num_examples: 972
- name: encodec_24k_24bps
num_bytes: 348037498
num_examples: 972
- name: encodec_24k_3bps
num_bytes: 43544890
num_examples: 972
- name: encodec_24k_6bps
num_bytes: 87043834
num_examples: 972
- name: funcodec_en_libritts_16k_gr1nq32ds320
num_bytes: 232330618
num_examples: 972
- name: funcodec_en_libritts_16k_gr8nq32ds320
num_bytes: 232330618
num_examples: 972
- name: funcodec_en_libritts_16k_nq32ds320
num_bytes: 232081786
num_examples: 972
- name: funcodec_en_libritts_16k_nq32ds640
num_bytes: 116126074
num_examples: 972
- name: funcodec_zh_en_16k_nq32ds320
num_bytes: 232081786
num_examples: 972
- name: funcodec_zh_en_16k_nq32ds640
num_bytes: 116126074
num_examples: 972
- name: speech_tokenizer_16k
num_bytes: 58054906
num_examples: 972
download_size: 311913132
dataset_size: 2789871454
---
# Dataset Card for "vox_lingua_top10_unit"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
WebauthorLLC/abstracts | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: recall
dtype: int64
- name: article_title
dtype: string
- name: topic
dtype: string
- name: abstract
dtype: string
splits:
- name: train
num_bytes: 232932181
num_examples: 135922
- name: test
num_bytes: 29105093
num_examples: 16991
- name: valid
num_bytes: 29122441
num_examples: 16990
download_size: 157167708
dataset_size: 291159715
---
# Dataset Card for "abstracts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Elliot4AI/ipc_chinese | ---
license: cc-by-nc-2.0
task_categories:
- text-generation
language:
- zh
tags:
- legal
size_categories:
- 1M<n<10M
---
Dataset Summary
🏡🏡🏡🏡 Fine-tune dataset: a Chinese-language dataset 🏡🏡🏡🏡
😀😀😀😀😀😀😀😀 This dataset was compiled from the Chinese edition of the IPC.
IPC: International Patent Classification
|
communityai/HuggingFaceH4___Code-Feedback | ---
dataset_info:
features:
- name: source
dtype: string
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 393010876.0
num_examples: 65383
download_size: 163860574
dataset_size: 393010876.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
lonestar108/fear | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validate
path: data/validate-*
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 6636
num_examples: 28
- name: test
num_bytes: 3323
num_examples: 12
- name: validate
num_bytes: 560
num_examples: 3
download_size: 12635
dataset_size: 10519
---
# Dataset Card for "new_fear"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Paviraj5598/LLAVA-Training-data | ---
dataset_info:
features:
- name: conversations
dtype: string
- name: id
dtype: string
- name: image
dtype: string
splits:
- name: train
num_bytes: 289806
num_examples: 704
download_size: 15064
dataset_size: 289806
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tasksource/icl-symbol-tuning-instruct | ---
license: apache-2.0
task_categories:
- text2text-generation
- text-classification
- text-generation
language:
- en
tags:
- in-context-learning
- symbol-tuning
- icl
- meta-icl
- meta-learning
- flan
- long-input
- instruction-tuning
- instruct
- metaicl
dataset_info:
features:
- name: task
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
- name: symbols
sequence: string
splits:
- name: validation
num_bytes: 42218685.0
num_examples: 14970
- name: test
num_bytes: 43453364.0
num_examples: 16204
- name: train
num_bytes: 1303015298.0
num_examples: 452367
download_size: 727062369
dataset_size: 1388687347.0
size_categories:
- 100K<n<1M
---
# Description
Few-shot prompting demonstrates that language models can learn in context even though they were not trained to do so. However, explicitly learning to learn in context ([meta-icl](https://arxiv.org/abs/2110.15943)) leads to better results. With symbol tuning, labels are replaced with arbitrary symbols (e.g. foo/bar), which makes learning in context a key condition for learning the task.
We implement *symbol tuning*, as presented in the [Symbol tuning improves in-context learning](https://arxiv.org/pdf/2305.08298.pdf) paper with tasksource classification datasets.
An input is a shuffled sequence of 4 positive and 4 negative examples showing a particular label (replaced with a symbol - a random word), followed by an example to label.
This is the largest symbol-tuning dataset to date, with 279 datasets. Symbol tuning improves in-context learning, which tends to be degraded by instruction tuning.
# Usage
We limit input size to 50,000 characters, which is more than enough to challenge long-range modeling. Be careful to remove examples that are too long, or to truncate them from the left; otherwise some examples become unsolvable, since the "question" is at the end of each example.
```python
dataset = load_dataset('tasksource/icl-symbol-tuning-instruct')
# assuming 4 characters per token and 1000 tokens
dataset = dataset.filter(lambda x:len(x['inputs'])<1000*4)
```
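For instance, left truncation (instead of filtering) can be sketched as follows; the 4-characters-per-token heuristic matches the snippet above, but the helper name and the `map` call are illustrative assumptions, not part of an official recipe:

```python
def truncate_left(text, max_tokens=1000, chars_per_token=4):
    """Keep the tail of the input so the final question to label is never
    cut off (truncating from the right could make an example unsolvable)."""
    max_chars = max_tokens * chars_per_token
    return text[-max_chars:] if len(text) > max_chars else text

# Hypothetical usage on the 'inputs' column of each example:
# dataset = dataset.map(lambda x: {'inputs': truncate_left(x['inputs'])})
```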
## References:
Code: https://github.com/sileod/tasksource
```
@article{sileo2023tasksource,
title={tasksource: Structured Dataset Preprocessing Annotations for Frictionless Extreme Multi-Task Learning and Evaluation},
author={Sileo, Damien},
url= {https://arxiv.org/abs/2301.05948},
journal={arXiv preprint arXiv:2301.05948},
year={2023}
}
@article{wei2023symbol,
title={Symbol tuning improves in-context learning in language models},
author={Wei, Jerry and Hou, Le and Lampinen, Andrew and Chen, Xiangning and Huang, Da and Tay, Yi and Chen, Xinyun and Lu, Yifeng and Zhou, Denny and Ma, Tengyu and others},
journal={arXiv preprint arXiv:2305.08298},
year={2023}
}
``` |
Matheus30cs/Moe | ---
license: openrail
---
|
Gummybear05/pause_changed1 | ---
dataset_info:
features:
- name: path
dtype: string
- name: filename
dtype: string
- name: text
dtype: string
- name: quality
dtype: string
- name: city
dtype: string
- name: gender
dtype: string
- name: age
dtype: string
- name: audio
struct:
- name: array
sequence: float32
- name: path
dtype: string
- name: sample_rate
dtype: int64
splits:
- name: train
num_bytes: 3707235500
num_examples: 8531
- name: test
num_bytes: 43120311
num_examples: 120
download_size: 1337027053
dataset_size: 3750355811
---
# Dataset Card for "pause_changed1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
blingBillie/first-dataset | ---
license: apache-2.0
---
|
memepottaboah/riffusion-PaulMcCartney | ---
license: openrail
---
|
DynamicSuperb/EnvironmentalSoundClassification_ESC50-ExteriorAndUrbanNoises | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: label
dtype: string
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 88253295.5
num_examples: 200
download_size: 83723205
dataset_size: 88253295.5
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
# Dataset Card for "environmental_sound_classification_exterior_and_urban_noises_ESC50"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
makram93/rejected_pairs | ---
dataset_info:
features:
- name: url
dtype: string
- name: doc_id
dtype: string
- name: original_title
sequence: string
- name: right
dtype: string
- name: left
dtype: string
splits:
- name: train
num_bytes: 85236.05575519982
num_examples: 100
download_size: 58204
dataset_size: 85236.05575519982
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "rejected_pairs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ibranze/araproje_hellaswag_en_conf_llama_bestscore | ---
dataset_info:
features:
- name: ind
dtype: int32
- name: activity_label
dtype: string
- name: ctx_a
dtype: string
- name: ctx_b
dtype: string
- name: ctx
dtype: string
- name: endings
sequence: string
- name: source_id
dtype: string
- name: split
dtype: string
- name: split_type
dtype: string
- name: label
dtype: string
splits:
- name: validation
num_bytes: 149738.0
num_examples: 250
download_size: 81199
dataset_size: 149738.0
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "araproje_hellaswag_en_conf_llama_bestscore"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
steamcyclone/Pill_Ideologies-Post_Titles | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license: cc-by-nc-sa-4.0
size_categories:
- 1K<n<10K
source_datasets:
- reddit
task_categories:
- text-classification
- summarization
- feature-extraction
- token-classification
- sentence-similarity
- text-to-speech
- text-to-audio
- text2text-generation
task_ids:
- multi-class-classification
pretty_name: Pill Ideologies Posts
tags:
- natural-language-understanding
- ideology classification
- text classification
- natural language processing
dataset_info:
- config_name: default
features:
- name: subreddit
dtype: string
- name: post_id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: url
dtype: string
- name: score
dtype: int32
- name: author
dtype: string
- name: date
dtype: int64
- config_name: first_domain
features:
- name: subreddit
dtype: string
- name: id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: url
dtype: string
- name: score
dtype: int64
- name: author
dtype: string
- name: date
dtype: string
- name: subreddit_subscribers
dtype: int32
- name: num_comments
dtype: int32
- name: ups
dtype: int32
- name: downs
dtype: int32
- name: upvote_ratio
dtype: float32
- name: num_reports
dtype: string
- name: is_video
dtype: bool
splits:
- name: train
num_bytes: 8365101
num_examples: 5123
- name: validation
num_bytes: 2052934
num_examples: 1281
- name: test
num_bytes: 1129446
num_examples: 712
download_size: 11365843
dataset_size: 11547481
- config_name: second_domain
features:
- name: subreddit
dtype: string
- name: id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: url
dtype: string
- name: score
dtype: int64
- name: author
dtype: string
- name: date
dtype: string
- name: subreddit_subscribers
dtype: int32
- name: num_comments
dtype: int32
- name: ups
dtype: int32
- name: downs
dtype: int32
- name: upvote_ratio
dtype: float32
- name: num_reports
dtype: string
- name: is_video
dtype: bool
splits:
- name: train
num_bytes: 8365101
num_examples: 5123
- name: validation
num_bytes: 2052934
num_examples: 1281
- name: test
num_bytes: 1129446
num_examples: 712
download_size: 11365843
dataset_size: 11547481
---
# Dataset Card for Pill Ideologies - Post Titles
<!-- Provide a quick summary of the dataset. -->
This dataset aims to be a tool to help identify linguistic patterns and glean insights from the reddit posts of members who partake in the internet-centric pill ideologies, known as black pill, red pill, and blue pill. It is strictly meant for academic use to help understand the polarity between men and women today in the United States, NOT for commercial use in any context or circumstance.
## Dataset Details
### Dataset Description
A few of the major groups' posts have been coalesced into one dataset, all from different years. There are more than 1,000 posts per the major pill groups on reddit (red pill, blue pill, black pill). These are all the subreddits used for the scraping : "theredpillrebooted", "RedPillWomen", "marriedredpill", "RedPillWives", "askMRP", "TheBluePill","PurplePillDebate","Feminism", and "ForeverAloneWomen".
The groups of Feminism and Forever Alone Women were added as a juxtaposition against red pill women, in order to allow researchers to explore the dichotomies between female groups. In the case of the Feminism subreddit, it can sometimes appear similar to the blue pill reddit in language, and Forever Alone Women are proxies for female incels (involuntary celibates), acting as linguistic mirrors to both the red pill and blue pill, depending on which language they adopt. For researchers, the value will be in identifying or classifying the types of words that serve as identifiers of one ideology more than another.
- **Curated by:** [steamcyclone] (Eric Rios)
- **Funded by [optional]:** [No one, get me funding to research this]
- **Shared by [optional]:** [steamcyclone and reddit users]
- **Language(s) (NLP):** [EN]
- **License:** [CC]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [https://huggingface.co/datasets/steamcyclone/Pill_Ideologies-Post_Titles]
## Uses
The main usage of this dataset is to study linguistic patterns. Running models and detecting word usage per groups, as well as overlaps across groups, are ideal uses for this dataset. With the rise of the loneliness epidemic, any insights that come from this are welcome.
Here is an example analysis notebook showing what can be done with this type of data.
Example : [https://colab.research.google.com/drive/1ELsp4ccdJgAi6R3FH8e5oj1KNllZmZEz?usp=sharing]
### Direct Use
The suitable use cases are to multi-class classification, word clustering or semantic clustering per different groups, summarization modeling, text parsing, and any other natural language processing task.
### Out-of-Scope Use
This dataset is not meant to be utilized to demonize or mock certain online communities for the trials in life in which individuals find themselves. If the user's motive is to push forward some misandrist or misogynistic agenda, please ignore this dataset and kindly let yourself out the door.
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
Currently, this dataset contains
- subreddit of the post : string,
- postid : string
- title of the post: string
- text of the post (where applicable) : string
- url (if something was embedded) : string
- score : int32
- date : float64
- subreddit_subscribers: int64
- num_comments: int64
- ups: int64
- downs: int64
- upvote_ratio : float64
- is_video: bool
## Dataset Creation
### Context of the Pill Ideologies
With the rise of the male loneliness epidemic and the radicalization of internet content pitting men and women against each other, it is important to seek understanding about the roots of the problem. Depending on whom you ask, you'll get a plethora of answers. Jordan Peterson describes it as some type of post-modernist feminist liberalism problem. The Andrew Tates and other conservative archetypes blame the loss of traditionalism. Others blame dating apps and their selection bias effects. The answer may be a combination of these or somewhere in the middle.
More specifically, within each of the major pill ideologies, with the exception of the BlackPill, in both the most extreme and mild settings, men blame women to some or large degrees, and women blame men to large degrees. As for the latter, it is very common to witness social media trends of women expressing distaste and disappointment in men, and this has been occurring for a few years.
As a reaction to this treatment, poor dating outcomes, and poor life outcomes, men and boys from all walks of life sought guidance and self-improvement. In response to this need, the Red Pill was born on the internet, most prominently on Reddit (before being banned), and it specialized in combining information from various sources to boost dating outcomes via the understanding of female nature, self-improvement (image, hygiene, and career), and social skills. Its main demographic has been lonely men, a unique group of disavowed people about whom there is very little research. Unfortunately, in recent years, there has been a rise of extremist blue pill ideologies, associated with misandrist speech (women who belittle men), and extremist red pill misogynists (men who belittle women).
As for Black Pill, it seeks to understand truth through bodies of research. That is their claim.
It has become quite difficult to isolate less extreme variants of the ideologies from the base variants, and it has also become difficult to sustain academic conversations regarding these topics due to public scrutiny. We have to start somewhere, as can be evidenced by the efforts of all sorts of psychiatrists (Dr. K, Jordan Peterson) and scientists/researchers (Dr. Tali Sharot, Prof. Scott Galloway) around the world.
### Curation Rationale : Why This Dataset?
Now more than ever, polarization is a topic that has gone beyond politics and is now deeply embedded in dating dynamics (which have also become proxies for politics - conservative/liberal dynamics). To make matters worse, male spaces in particular, as substantiated by research and media coverage in recent years, have only been able to exist on the internet due to the scrutiny and silencing of male voices, and counter-spaces have emerged to challenge the views held in the differing ideologies. The same extends to the other groups, where speaking publicly on such matters earns weird looks at best and public shame and social exile at worst. In the current social climate, the dominant ideology is most commonly labeled as mild blue pill, occasionally with a tinge of Black Pill.
In contrast, the works of Dr. Alok Kanojia (Dr. K, Healthy Gamer Foundation) serve as a basis to understand the individual behind the pain and help said individual build human connections worth having. To that end, what better way to understand people than to listen to them directly, on a platform's subreddits that were created solely for them to share their thoughts, unfiltered thanks to the anonymity? Can we derive some understanding of the multiple disenfranchised groups from this dataset? Can such understanding be published to ultimately help people become better people, sons/daughters, spouses and partners?
The purpose of this dataset is to help people by aiding understanding of the different groups.
### Source Data
Each record contains a reddit post, a couple hundred per subreddit, and has a key title and a post with words to display the intended message by the author. The authors will remain anonymous, as they do not deserve persecution for their thoughts, whether you disagree with them or not.
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
The empty text fields almost always corresponded to videos, so they have been replaced by empty strings. The curation of the content can be expanded in the future, but for now, over 7,000 records have been curated.
#### Who are the source data producers?
The producers of the data are the various redditors who have participated in these spaces.
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
The origin of the posts are the labels of the records.
#### Who are the annotators?
The subreddit origin and the post authors (by deciding to publish on the specific subreddit) are the label annotators.
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
This dataset contains no personally identifiable information with the exception of embedded youtube links. Those links may lead to videos where the impact of the content is unknown.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
A major caveat is that the pink pill and original red pill groups are shadow banned, impeding their scraping process. This is a flaw I recognize because the original red pill movement, which started in books by authors, propagated itself through its internet (reddit) variant, and it spawned all the other pills. In other words, the biggest sources of information are locked away, and we have to make use of their closest proxies and/or sibling subreddits.
Another bias point is that there are more red pill groupings, as a means to compensate for the ban of the original red pill subreddit.
As such, I caution researchers to balance their datasets where necessary. The next step for this dataset is to expand to take the original Red and Pink Pill subreddits.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. Remember that this dataset is not a tool for reckless and hateful political agendas.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[Blog Post Coming Soon]
**APA:**
[Blog Post Coming Soon]
## Glossary [optional]
Quick Definitions of the Pill ideologies :
In short, according to archetypical definitions
- the red pill is the emancipation of the masculinity in a feminized age and understanding mating strategies with women.
- the blue pill is the satire of the red pill, often run by women.
- the black pill is meant to bridge the gaps across the red, pink, and blue pills in order to land on a ground truth.
- the pink pill is about improving the female image by augmenting sexual marketplace value.
## Dataset Card Authors [optional]
steamcyclone, all the redditors from the subreddits (anonymized).
## Dataset Card Contact
- Look me up. |
Kartik14Singh/ifd | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 4603313427.492
num_examples: 12614
- name: test
num_bytes: 87471486.777
num_examples: 1721
download_size: 3299104876
dataset_size: 4690784914.269
---
# Dataset Card for "ifd"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yiran223/toxic-detection-testset-perturbations | ---
language:
- en
size_categories:
- 10K<n<100K
---
# Dataset Card for toxic-detection-testset-perturbations
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset is a test set for toxic detection that contains both clean data and its perturbed version, with human-written perturbations collected online.
In addition, our dataset can be used to benchmark misspelling correctors.
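As a rough illustration of the corrector-benchmarking use case, one could measure how often a corrector maps the perturbed text back to its clean counterpart; the helper name, the identity baseline, and the hard-coded pair below are placeholders, not part of the dataset:

```python
def recovery_rate(pairs, corrector):
    """Fraction of (clean, perturbed) pairs for which the corrector
    maps the perturbed text back exactly to the clean reference."""
    hits = sum(corrector(perturbed) == clean for clean, perturbed in pairs)
    return hits / len(pairs)

pairs = [
    ("this is pretty much exactly how i feel damn",
     "this is pretty much exactly how i feel daaammnn"),
]
identity = lambda text: text  # trivial baseline: apply no correction
print(recovery_rate(pairs, identity))  # 0.0 for this pair
```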
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
### Data Instances
```
{
"clean_version": "this is pretty much exactly how i feel damn",
"perturbed_version": "this is pretty much exactly how i feel daaammnn",
"toxicity": 0.7,
"obscene": 0.7,
"sexual_explicit": 0,
"identity_attack": 0,
...
"insult": 0.2,
"quality_mean": 4
}
```
### Data Fields
This dataset is derived from the [Jigsaw data](https://www.kaggle.com/competitions/jigsaw-unintended-bias-in-toxicity-classification/data). Hence, it keeps all the useful metrics and attributes.
**Main**
* clean_version
* perturbed_version
**Metrics**
* toxicity
* severe_toxicity
* obscene
* threat
* insult
* identity_attack
* sexual_explicit
**Identity attributes**
* male
* female
* transgender
* other_gender
* heterosexual
* homosexual_gay_or_lesbian
* bisexual
* other_sexual_orientation
* christian
* jewish
* muslim
* hindu
* buddhist
* atheist
* other_religion
* black
* white
* asian
* latino
* other_race_or_ethnicity
* physical_disability
* intellectual_or_learning_disability
* psychiatric_or_mental_illness
* other_disability
### Data Splits
test: 1339
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
US Amazon MTurk workers with HIT Approval Rate greater than 98%, and Number of HITs approved greater than 1000.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
irds/mr-tydi_te | ---
pretty_name: '`mr-tydi/te`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `mr-tydi/te`
The `mr-tydi/te` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/mr-tydi#mr-tydi/te).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=548,224
- `queries` (i.e., topics); count=5,517
- `qrels`: (relevance assessments); count=5,540
This dataset is used by: [`mr-tydi_te_dev`](https://huggingface.co/datasets/irds/mr-tydi_te_dev), [`mr-tydi_te_test`](https://huggingface.co/datasets/irds/mr-tydi_te_test), [`mr-tydi_te_train`](https://huggingface.co/datasets/irds/mr-tydi_te_train)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/mr-tydi_te', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ...}
queries = load_dataset('irds/mr-tydi_te', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/mr-tydi_te', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@article{Zhang2021MrTyDi,
title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval},
author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin},
year={2021},
journal={arXiv:2108.08787},
}
@article{Clark2020TyDiQa,
title={{TyDi QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
author={Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
year={2020},
journal={Transactions of the Association for Computational Linguistics}
}
```
|
Aliissa99/FrenchMedMCQA | ---
task_categories:
- text-classification
language:
- fr
pretty_name: FrenchMedMCQA
size_categories:
- 1K<n<10K
--- |
Nicolas-BZRD/JADE_opendata | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5674266682
num_examples: 558649
download_size: 2253639724
dataset_size: 5674266682
license: odc-by
language:
- fr
tags:
- legal
size_categories:
- 100K<n<1M
---
# JADE
[Decisions of the Council of State, administrative courts of appeal, and the Court of Conflicts.](https://echanges.dila.gouv.fr/OPENDATA/JADE/)<br>
For the Council of State:
- the "landmark judgments" that established administrative law;
- decisions published in the Official Collection of Council of State Decisions (Lebon collection) since 1965;
- a limited selection of unpublished decisions in the collection between 1975 and 1986, with an expanded selection since 1986.
For the Administrative Courts of Appeal (CAA):
- a selection of judgments, varying for each of the 8 Courts, dating back to the establishment of the respective Court (1989 for the oldest CAAs).
For the administrative tribunals:
- A very limited selection starting in 1965, consisting of judgments chosen for publication or reference in the Lebon collection. |
shajiu/Tibetan_Monolingual_Ddata | ---
license: apache-2.0
---
### This is a Tibetan monolingual dataset collected from the web, containing 258,661 entries. It has been preprocessed and cleaned and can be used for pretraining.
### The data format is shown below:
```json
{
"taskname": "用于预训练的单语数据集",
"url": "",
"instruction": "公开数据集",
"input": "ཚན་རིག་ནི་དང་ཐོག་རང་བྱུང་ཁྱབ་ཁོངས་ཀྱི་ཤེས་བྱ་ཡིན་ཞིང་འདི་ནས་སྤྱི་ཚོགས་དང་བསམ་བློ་ལ་སོགས་སུ་ཁྱབ་ཆེ་རུ་ཕྱིན།དཔེར་ནི་སྤྱི་ཚོགས་ཚན་རིག་ལྟ་བུ།",
"output": ""
}
``` |
Binho7/victorino | ---
license: openrail
---
|
CyberHarem/skadi_arknights | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of skadi/スカジ/斯卡蒂 (Arknights)
This is the dataset of skadi/スカジ/斯卡蒂 (Arknights), containing 500 images and their tags.
The core tags of this character are `long_hair, red_eyes, very_long_hair, hair_between_eyes, breasts, grey_hair, medium_breasts, white_hair, hat, large_breasts, no_headwear`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 1.17 GiB | [Download](https://huggingface.co/datasets/CyberHarem/skadi_arknights/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 500 | 965.47 MiB | [Download](https://huggingface.co/datasets/CyberHarem/skadi_arknights/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1376 | 1.84 GiB | [Download](https://huggingface.co/datasets/CyberHarem/skadi_arknights/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/skadi_arknights',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
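The IMG+TXT packages store one caption `.txt` file alongside each image. Below is a minimal sketch of pairing images with their captions by filename stem; the filenames are hypothetical, and in practice you would list `Path(dataset_dir).iterdir()` after extraction:

```python
from pathlib import Path

# Hypothetical filenames as they might appear in an extracted IMG+TXT package.
names = ['000001.png', '000001.txt', '000002.png', '000002.txt']

# Bucket files by stem, keyed by suffix.
stems = {}
for name in names:
    p = Path(name)
    stems.setdefault(p.stem, {})[p.suffix] = name

# Keep only complete image/caption pairs.
pairs = [(v['.png'], v['.txt']) for v in stems.values()
         if '.png' in v and '.txt' in v]
print(pairs)  # [('000001.png', '000001.txt'), ('000002.png', '000002.txt')]
```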
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, aqua_gloves, bare_shoulders, black_ascot, black_ribbon, detached_sleeves, holding_staff, leg_ribbon, long_sleeves, looking_at_viewer, navel_cutout, official_alternate_costume, red_dress, short_dress, solo, thighs, aqua_headwear, fish, wide_sleeves, cowboy_shot, parted_lips |
| 1 | 8 |  |  |  |  |  | 1girl, bare_shoulders, black_ascot, detached_sleeves, long_sleeves, official_alternate_costume, red_dress, solo, upper_body, closed_mouth, looking_at_viewer, navel_cutout, simple_background, aqua_headwear, white_background, blush, gloves |
| 2 | 7 |  |  |  |  |  | 1girl, looking_at_viewer, solo, upper_body, bare_shoulders, black_ascot, simple_background, white_background, cropped_torso, detached_sleeves, shirt |
| 3 | 11 |  |  |  |  |  | 1girl, black_gloves, black_pants, looking_at_viewer, solo, thigh_cutout, holding_sword, bare_shoulders, black_ascot, closed_mouth, cowboy_shot, thighs, standing, black_shirt, detached_sleeves |
| 4 | 26 |  |  |  |  |  | 1girl, bare_shoulders, looking_at_viewer, off-shoulder_shirt, official_alternate_costume, solo, white_shirt, black_choker, blue_shorts, navel, short_shorts, short_sleeves, cleavage, sunglasses, stomach, cowboy_shot, eyewear_on_head, thighs, thigh_strap, midriff, standing, hand_up, parted_lips, inflatable_toy, sun_hat, white_background, white_headwear, holding, ahoge, simple_background |
| 5 | 9 |  |  |  |  |  | 1girl, bare_shoulders, blue_sky, cloud, day, looking_at_viewer, official_alternate_costume, outdoors, short_sleeves, solo, thigh_strap, thighs, white_shirt, navel, off-shoulder_shirt, short_shorts, sitting, sunglasses, blue_shorts, cleavage, stomach, ahoge, beach_umbrella, black_choker, drinking_glass, eyewear_on_head, hair_ornament, inflatable_toy, drinking_straw, lemon_slice, low-tied_long_hair, midriff, ocean |
| 6 | 14 |  |  |  |  |  | 1girl, bare_shoulders, looking_at_viewer, official_alternate_costume, phoenix_crown, solo, white_dress, off-shoulder_dress, black_dress, front_ponytail, layered_dress, parted_lips, thigh_strap, cleavage, black_gloves, detached_collar, grey_headwear, holding_staff, knee_boots, sitting, closed_mouth, low-tied_long_hair |
| 7 | 9 |  |  |  |  |  | 1girl, long_sleeves, open_jacket, solo, black_shirt, looking_at_viewer, black_thighhighs, one_side_up, black_gloves, green_jacket, necklace, short_shorts, ahoge, black_belt, black_shorts, alternate_costume, apple, black_ribbon, closed_mouth, food_bite, grey_shorts, holding_fruit, indoors, sitting, sweater |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | aqua_gloves | bare_shoulders | black_ascot | black_ribbon | detached_sleeves | holding_staff | leg_ribbon | long_sleeves | looking_at_viewer | navel_cutout | official_alternate_costume | red_dress | short_dress | solo | thighs | aqua_headwear | fish | wide_sleeves | cowboy_shot | parted_lips | upper_body | closed_mouth | simple_background | white_background | blush | gloves | cropped_torso | shirt | black_gloves | black_pants | thigh_cutout | holding_sword | standing | black_shirt | off-shoulder_shirt | white_shirt | black_choker | blue_shorts | navel | short_shorts | short_sleeves | cleavage | sunglasses | stomach | eyewear_on_head | thigh_strap | midriff | hand_up | inflatable_toy | sun_hat | white_headwear | holding | ahoge | blue_sky | cloud | day | outdoors | sitting | beach_umbrella | drinking_glass | hair_ornament | drinking_straw | lemon_slice | low-tied_long_hair | ocean | phoenix_crown | white_dress | off-shoulder_dress | black_dress | front_ponytail | layered_dress | detached_collar | grey_headwear | knee_boots | open_jacket | black_thighhighs | one_side_up | green_jacket | necklace | black_belt | black_shorts | alternate_costume | apple | food_bite | grey_shorts | holding_fruit | indoors | sweater |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------|:-----------------|:--------------|:---------------|:-------------------|:----------------|:-------------|:---------------|:--------------------|:---------------|:-----------------------------|:------------|:--------------|:-------|:---------|:----------------|:-------|:---------------|:--------------|:--------------|:-------------|:---------------|:--------------------|:-------------------|:--------|:---------|:----------------|:--------|:---------------|:--------------|:---------------|:----------------|:-----------|:--------------|:---------------------|:--------------|:---------------|:--------------|:--------|:---------------|:----------------|:-----------|:-------------|:----------|:------------------|:--------------|:----------|:----------|:-----------------|:----------|:-----------------|:----------|:--------|:-----------|:--------|:------|:-----------|:----------|:-----------------|:-----------------|:----------------|:-----------------|:--------------|:---------------------|:--------|:----------------|:--------------|:---------------------|:--------------|:-----------------|:----------------|:------------------|:----------------|:-------------|:--------------|:-------------------|:--------------|:---------------|:-----------|:-------------|:---------------|:--------------------|:--------|:------------|:--------------|:----------------|:----------|:----------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 8 |  |  |  |  |  | X | | X | X | | X | | | X | X | X | X | X | | X | | X | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | | X | X | | X | | | | X | | | | | X | | | | | | | X | | X | X | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 11 |  |  |  |  |  | X | | X | X | | X | | | | X | | | | | X | X | | | | X | | | X | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 26 |  |  |  |  |  | X | | X | | | | | | | X | | X | | | X | X | | | | X | X | | | X | X | | | | | | | | | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 9 |  |  |  |  |  | X | | X | | | | | | | X | | X | | | X | X | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 14 |  |  |  |  |  | X | | X | | | | X | | | X | | X | | | X | | | | | | X | | X | | | | | | | X | | | | | | | | | | | | | X | | | | X | | | | | | | | | | | | X | | | | | | X | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 7 | 9 |  |  |  |  |  | X | | | | X | | | | X | X | | | | | X | | | | | | | | X | | | | | | | X | | | | | X | | | | | | X | | | | | | | | | | | | | X | | | | | X | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
NghiemAbe/translation-vietnamese-english | ---
license: mit
task_categories:
- translation
language:
- vi
- en
size_categories:
- 100M<n<1B
---
Test data: PhoMT
Train data: PhoMT (filtered to lengths between 40 and 100) |
rheubanks/llme2_sft_dataset_rlaif | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8783
num_examples: 5
download_size: 16409
dataset_size: 8783
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
appvoid/noisy-textbook-25k | ---
dataset_info:
features:
- name: id
dtype: large_string
- name: prompt
dtype: large_string
- name: textbook
dtype: large_string
- name: question
dtype: large_string
- name: response
dtype: large_string
- name: text
dtype: string
splits:
- name: train
num_bytes: 384408793
num_examples: 25000
download_size: 172054352
dataset_size: 384408793
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
owanr/o1o2o3_large_r2_coedit_iter_with_human_pref_practice | ---
dataset_info:
features:
- name: src
dtype: string
- name: tgt
dtype: string
splits:
- name: train
num_bytes: 69565984
num_examples: 241474
- name: val
num_bytes: 2920228
num_examples: 10642
- name: test
num_bytes: 2920962
num_examples: 10667
download_size: 28728953
dataset_size: 75407174
---
# Dataset Card for "o1o2o3_large_r2_coedit_iter_with_human_pref_practice"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
susnato/python_PRs | ---
dataset_info:
features:
- name: repo_name
dtype: string
- name: pr_number
dtype: int64
- name: pr_title
dtype: string
- name: pr_description
dtype: string
- name: author
dtype: string
- name: date_created
dtype: timestamp[ns, tz=UTC]
- name: date_merged
dtype: timestamp[ns, tz=UTC]
- name: previous_commit
dtype: string
- name: pr_commit
dtype: string
- name: query
dtype: string
- name: filepath
dtype: string
- name: before_content
dtype: string
- name: after_content
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 88482813511
num_examples: 551666
download_size: 71508720685
dataset_size: 88482813511
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
pontusnorman123/sweset3_wild751 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: words
sequence: string
- name: bboxes
sequence:
sequence: float64
- name: ner_tags
sequence:
class_label:
names:
'0': I-COMPANY
'1': I-DATE
'2': I-ADDRESS
'3': I-TOTAL
'4': I-TAX
'5': I-PRODUCT
'6': O
- name: image
dtype: image
splits:
- name: train
num_bytes: 921697089.0
num_examples: 1000
- name: test
num_bytes: 53446922.0
num_examples: 50
download_size: 970518459
dataset_size: 975144011.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
huggingartists/fear-factory | ---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/fear-factory"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.178617 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/5c2952ca198d8eda91b478829b867fd6.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/fear-factory">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Fear Factory</div>
<a href="https://genius.com/artists/fear-factory">
<div style="text-align: center; font-size: 14px;">@fear-factory</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/fear-factory).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/fear-factory")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|197| -| -|
'Train' can easily be divided into 'train', 'validation', and 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/fear-factory")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
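The 90/7/3 split above cuts the 197-example `train` split at two indices; the index arithmetic `np.split` performs can be checked in plain Python, with no download needed:

```python
n = 197  # number of examples in the 'train' split
train_pct, val_pct = 0.9, 0.07

cut1 = int(n * train_pct)              # first cut index
cut2 = int(n * (train_pct + val_pct))  # second cut index
sizes = (cut1, cut2 - cut1, n - cut2)
print(sizes)  # (177, 14, 6)
```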
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk}
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
ahishamm/ph2_vit_db_cropped | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': benign
'1': malignant
splits:
- name: train
num_bytes: 20575812.0
num_examples: 54
- name: test
num_bytes: 5603479.0
num_examples: 14
download_size: 26189930
dataset_size: 26179291.0
---
# Dataset Card for "ph2_vit_db_cropped"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
greathero/evenmorex12-newsmallerthreeclass-newercontrailsvalidationdataset | ---
dataset_info:
features:
- name: pixel_values
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 144569751.241
num_examples: 7209
download_size: 30282009
dataset_size: 144569751.241
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CyberHarem/honey_badger_girlsfrontline | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of honey_badger/HoneyBadger/蜜獾 (Girls' Frontline)
This is the dataset of honey_badger/HoneyBadger/蜜獾 (Girls' Frontline), containing 41 images and their tags.
The core tags of this character are `breasts, long_hair, medium_breasts, bangs, purple_eyes, ahoge, grey_hair, hair_bun, brown_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 41 | 71.03 MiB | [Download](https://huggingface.co/datasets/CyberHarem/honey_badger_girlsfrontline/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 41 | 32.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/honey_badger_girlsfrontline/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 106 | 74.19 MiB | [Download](https://huggingface.co/datasets/CyberHarem/honey_badger_girlsfrontline/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 41 | 58.82 MiB | [Download](https://huggingface.co/datasets/CyberHarem/honey_badger_girlsfrontline/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 106 | 113.12 MiB | [Download](https://huggingface.co/datasets/CyberHarem/honey_badger_girlsfrontline/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/honey_badger_girlsfrontline',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 41 |  |  |  |  |  | 1girl, solo, looking_at_viewer, fingerless_gloves, black_gloves, choker, bare_shoulders, black_thighhighs, cleavage, smile, black_jacket, open_jacket, bikini_top_only, off_shoulder, collarbone, pleated_skirt, holding_gun, badge, black_bikini, navel, closed_mouth, nail_polish |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | looking_at_viewer | fingerless_gloves | black_gloves | choker | bare_shoulders | black_thighhighs | cleavage | smile | black_jacket | open_jacket | bikini_top_only | off_shoulder | collarbone | pleated_skirt | holding_gun | badge | black_bikini | navel | closed_mouth | nail_polish |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------------------|:--------------------|:---------------|:---------|:-----------------|:-------------------|:-----------|:--------|:---------------|:--------------|:------------------|:---------------|:-------------|:----------------|:--------------|:--------|:---------------|:--------|:---------------|:--------------|
| 0 | 41 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
dmrau/cqadubstack-stats-qrels | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: test
num_bytes: 23665
num_examples: 913
download_size: 13316
dataset_size: 23665
---
# Dataset Card for "cqadubstack-stats-qrels"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Aditya149/Mental_Health_Counselling_Dataset | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 13170027.2
num_examples: 13496
- name: test
num_bytes: 3292506.8
num_examples: 3374
download_size: 7721379
dataset_size: 16462534.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
yuvalkirstain/images_first_day | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: created_at
dtype: timestamp[ns]
- name: image_hash
dtype: string
- name: user_id
dtype: int64
- name: prompt
dtype: string
- name: negative_prompt
dtype: string
- name: seed
dtype: int64
- name: gs
dtype: float64
- name: steps
dtype: int64
- name: idx
dtype: int64
- name: num_generated
dtype: int64
- name: scheduler_cls
dtype: string
- name: model_id
dtype: string
- name: url
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 5027572586.584
num_examples: 6916
download_size: 5024119623
dataset_size: 5027572586.584
---
# Dataset Card for "images_first_day"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
j03x/CheckThat2023_Test | ---
license: unknown
---
|
oscarwarner/dataset | ---
license: mit
---
|
cristiancavalli/mini-platypus | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 4186564
num_examples: 1000
download_size: 2245921
dataset_size: 4186564
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/details_CausalLM__72B-preview-llamafied-qwen-llamafy | ---
pretty_name: Evaluation run of CausalLM/72B-preview-llamafied-qwen-llamafy
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [CausalLM/72B-preview-llamafied-qwen-llamafy](https://huggingface.co/CausalLM/72B-preview-llamafied-qwen-llamafy)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_CausalLM__72B-preview-llamafied-qwen-llamafy\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-19T03:04:27.948723](https://huggingface.co/datasets/open-llm-leaderboard/details_CausalLM__72B-preview-llamafied-qwen-llamafy/blob/main/results_2024-01-19T03-04-27.948723.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.765694970151033,\n\
\ \"acc_stderr\": 0.02794684069092645,\n \"acc_norm\": 0.769421113917392,\n\
\ \"acc_norm_stderr\": 0.02847875554958271,\n \"mc1\": 0.3671970624235006,\n\
\ \"mc1_stderr\": 0.01687480500145318,\n \"mc2\": 0.5254959632468497,\n\
\ \"mc2_stderr\": 0.014732861007836748\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6083617747440273,\n \"acc_stderr\": 0.014264122124938213,\n\
\ \"acc_norm\": 0.6518771331058021,\n \"acc_norm_stderr\": 0.013921008595179344\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6477793268273252,\n\
\ \"acc_stderr\": 0.004766860907171532,\n \"acc_norm\": 0.8324039036048596,\n\
\ \"acc_norm_stderr\": 0.003727438786513392\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001974,\n \
\ \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001974\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.7407407407407407,\n\
\ \"acc_stderr\": 0.03785714465066653,\n \"acc_norm\": 0.7407407407407407,\n\
\ \"acc_norm_stderr\": 0.03785714465066653\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.9078947368421053,\n \"acc_stderr\": 0.02353268597044349,\n\
\ \"acc_norm\": 0.9078947368421053,\n \"acc_norm_stderr\": 0.02353268597044349\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.79,\n\
\ \"acc_stderr\": 0.040936018074033256,\n \"acc_norm\": 0.79,\n \
\ \"acc_norm_stderr\": 0.040936018074033256\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.8264150943396227,\n \"acc_stderr\": 0.02331058302600625,\n\
\ \"acc_norm\": 0.8264150943396227,\n \"acc_norm_stderr\": 0.02331058302600625\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8958333333333334,\n\
\ \"acc_stderr\": 0.025545239210256917,\n \"acc_norm\": 0.8958333333333334,\n\
\ \"acc_norm_stderr\": 0.025545239210256917\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \
\ \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.05016135580465919\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.64,\n \"acc_stderr\": 0.048241815132442176,\n \"acc_norm\": 0.64,\n\
\ \"acc_norm_stderr\": 0.048241815132442176\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \
\ \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.04988876515698589\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.7861271676300579,\n\
\ \"acc_stderr\": 0.031265112061730445,\n \"acc_norm\": 0.7861271676300579,\n\
\ \"acc_norm_stderr\": 0.031265112061730445\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.5392156862745098,\n \"acc_stderr\": 0.04959859966384181,\n\
\ \"acc_norm\": 0.5392156862745098,\n \"acc_norm_stderr\": 0.04959859966384181\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.84,\n \"acc_stderr\": 0.03684529491774709,\n \"acc_norm\": 0.84,\n\
\ \"acc_norm_stderr\": 0.03684529491774709\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.8,\n \"acc_stderr\": 0.026148818018424502,\n \
\ \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.026148818018424502\n \
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5614035087719298,\n\
\ \"acc_stderr\": 0.04668000738510455,\n \"acc_norm\": 0.5614035087719298,\n\
\ \"acc_norm_stderr\": 0.04668000738510455\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.8,\n \"acc_stderr\": 0.0333333333333333,\n \
\ \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.0333333333333333\n },\n\
\ \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.6878306878306878,\n\
\ \"acc_stderr\": 0.023865206836972585,\n \"acc_norm\": 0.6878306878306878,\n\
\ \"acc_norm_stderr\": 0.023865206836972585\n },\n \"harness|hendrycksTest-formal_logic|5\"\
: {\n \"acc\": 0.5476190476190477,\n \"acc_stderr\": 0.044518079590553275,\n\
\ \"acc_norm\": 0.5476190476190477,\n \"acc_norm_stderr\": 0.044518079590553275\n\
\ },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.53,\n\
\ \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.53,\n \
\ \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-high_school_biology|5\"\
: {\n \"acc\": 0.8935483870967742,\n \"acc_stderr\": 0.01754510295165663,\n\
\ \"acc_norm\": 0.8935483870967742,\n \"acc_norm_stderr\": 0.01754510295165663\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.6600985221674877,\n \"acc_stderr\": 0.033327690684107895,\n \"\
acc_norm\": 0.6600985221674877,\n \"acc_norm_stderr\": 0.033327690684107895\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.77,\n \"acc_stderr\": 0.04229525846816505,\n \"acc_norm\"\
: 0.77,\n \"acc_norm_stderr\": 0.04229525846816505\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.8606060606060606,\n \"acc_stderr\": 0.0270459488258654,\n\
\ \"acc_norm\": 0.8606060606060606,\n \"acc_norm_stderr\": 0.0270459488258654\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.9393939393939394,\n \"acc_stderr\": 0.01699999492742161,\n \"\
acc_norm\": 0.9393939393939394,\n \"acc_norm_stderr\": 0.01699999492742161\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9896373056994818,\n \"acc_stderr\": 0.007308424386792194,\n\
\ \"acc_norm\": 0.9896373056994818,\n \"acc_norm_stderr\": 0.007308424386792194\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.8102564102564103,\n \"acc_stderr\": 0.019880165406588768,\n\
\ \"acc_norm\": 0.8102564102564103,\n \"acc_norm_stderr\": 0.019880165406588768\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.5185185185185185,\n \"acc_stderr\": 0.030464621718895322,\n \
\ \"acc_norm\": 0.5185185185185185,\n \"acc_norm_stderr\": 0.030464621718895322\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.8277310924369747,\n \"acc_stderr\": 0.02452866497130543,\n \
\ \"acc_norm\": 0.8277310924369747,\n \"acc_norm_stderr\": 0.02452866497130543\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.5364238410596026,\n \"acc_stderr\": 0.04071636065944217,\n \"\
acc_norm\": 0.5364238410596026,\n \"acc_norm_stderr\": 0.04071636065944217\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.9302752293577982,\n \"acc_stderr\": 0.010919426411848605,\n \"\
acc_norm\": 0.9302752293577982,\n \"acc_norm_stderr\": 0.010919426411848605\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.6851851851851852,\n \"acc_stderr\": 0.0316746870682898,\n \"acc_norm\"\
: 0.6851851851851852,\n \"acc_norm_stderr\": 0.0316746870682898\n },\n\
\ \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.9215686274509803,\n\
\ \"acc_stderr\": 0.018869514646658928,\n \"acc_norm\": 0.9215686274509803,\n\
\ \"acc_norm_stderr\": 0.018869514646658928\n },\n \"harness|hendrycksTest-high_school_world_history|5\"\
: {\n \"acc\": 0.8945147679324894,\n \"acc_stderr\": 0.019995560723758535,\n\
\ \"acc_norm\": 0.8945147679324894,\n \"acc_norm_stderr\": 0.019995560723758535\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.8116591928251121,\n\
\ \"acc_stderr\": 0.026241132996407252,\n \"acc_norm\": 0.8116591928251121,\n\
\ \"acc_norm_stderr\": 0.026241132996407252\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8778625954198473,\n \"acc_stderr\": 0.02871877688934232,\n\
\ \"acc_norm\": 0.8778625954198473,\n \"acc_norm_stderr\": 0.02871877688934232\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8677685950413223,\n \"acc_stderr\": 0.0309227883204458,\n \"acc_norm\"\
: 0.8677685950413223,\n \"acc_norm_stderr\": 0.0309227883204458\n },\n\
\ \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8518518518518519,\n\
\ \"acc_stderr\": 0.03434300243630999,\n \"acc_norm\": 0.8518518518518519,\n\
\ \"acc_norm_stderr\": 0.03434300243630999\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.8588957055214724,\n \"acc_stderr\": 0.027351605518389752,\n\
\ \"acc_norm\": 0.8588957055214724,\n \"acc_norm_stderr\": 0.027351605518389752\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.6696428571428571,\n\
\ \"acc_stderr\": 0.044642857142857116,\n \"acc_norm\": 0.6696428571428571,\n\
\ \"acc_norm_stderr\": 0.044642857142857116\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8640776699029126,\n \"acc_stderr\": 0.03393295729761011,\n\
\ \"acc_norm\": 0.8640776699029126,\n \"acc_norm_stderr\": 0.03393295729761011\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9401709401709402,\n\
\ \"acc_stderr\": 0.015537514263253878,\n \"acc_norm\": 0.9401709401709402,\n\
\ \"acc_norm_stderr\": 0.015537514263253878\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.88,\n \"acc_stderr\": 0.032659863237109066,\n \
\ \"acc_norm\": 0.88,\n \"acc_norm_stderr\": 0.032659863237109066\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.9195402298850575,\n\
\ \"acc_stderr\": 0.009726831316141866,\n \"acc_norm\": 0.9195402298850575,\n\
\ \"acc_norm_stderr\": 0.009726831316141866\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.8497109826589595,\n \"acc_stderr\": 0.019239318783904717,\n\
\ \"acc_norm\": 0.8497109826589595,\n \"acc_norm_stderr\": 0.019239318783904717\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.5664804469273743,\n\
\ \"acc_stderr\": 0.016574027219517635,\n \"acc_norm\": 0.5664804469273743,\n\
\ \"acc_norm_stderr\": 0.016574027219517635\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.8562091503267973,\n \"acc_stderr\": 0.020091188936043714,\n\
\ \"acc_norm\": 0.8562091503267973,\n \"acc_norm_stderr\": 0.020091188936043714\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.8520900321543409,\n\
\ \"acc_stderr\": 0.020163253806284125,\n \"acc_norm\": 0.8520900321543409,\n\
\ \"acc_norm_stderr\": 0.020163253806284125\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.8858024691358025,\n \"acc_stderr\": 0.017696832447213894,\n\
\ \"acc_norm\": 0.8858024691358025,\n \"acc_norm_stderr\": 0.017696832447213894\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.6347517730496454,\n \"acc_stderr\": 0.02872386385328127,\n \
\ \"acc_norm\": 0.6347517730496454,\n \"acc_norm_stderr\": 0.02872386385328127\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.6271186440677966,\n\
\ \"acc_stderr\": 0.012350630058333364,\n \"acc_norm\": 0.6271186440677966,\n\
\ \"acc_norm_stderr\": 0.012350630058333364\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.8272058823529411,\n \"acc_stderr\": 0.02296606758558181,\n\
\ \"acc_norm\": 0.8272058823529411,\n \"acc_norm_stderr\": 0.02296606758558181\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.8186274509803921,\n \"acc_stderr\": 0.015588643495370457,\n \
\ \"acc_norm\": 0.8186274509803921,\n \"acc_norm_stderr\": 0.015588643495370457\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7545454545454545,\n\
\ \"acc_stderr\": 0.041220665028782855,\n \"acc_norm\": 0.7545454545454545,\n\
\ \"acc_norm_stderr\": 0.041220665028782855\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7918367346938775,\n \"acc_stderr\": 0.025991117672813296,\n\
\ \"acc_norm\": 0.7918367346938775,\n \"acc_norm_stderr\": 0.025991117672813296\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8855721393034826,\n\
\ \"acc_stderr\": 0.022509345325101706,\n \"acc_norm\": 0.8855721393034826,\n\
\ \"acc_norm_stderr\": 0.022509345325101706\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.96,\n \"acc_stderr\": 0.01969463855669321,\n \
\ \"acc_norm\": 0.96,\n \"acc_norm_stderr\": 0.01969463855669321\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5662650602409639,\n\
\ \"acc_stderr\": 0.03858158940685515,\n \"acc_norm\": 0.5662650602409639,\n\
\ \"acc_norm_stderr\": 0.03858158940685515\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8830409356725146,\n \"acc_stderr\": 0.02464806896136616,\n\
\ \"acc_norm\": 0.8830409356725146,\n \"acc_norm_stderr\": 0.02464806896136616\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3671970624235006,\n\
\ \"mc1_stderr\": 0.01687480500145318,\n \"mc2\": 0.5254959632468497,\n\
\ \"mc2_stderr\": 0.014732861007836748\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.823993685872139,\n \"acc_stderr\": 0.010703090882320705\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7156937073540561,\n \
\ \"acc_stderr\": 0.012425078188395977\n }\n}\n```"
repo_url: https://huggingface.co/CausalLM/72B-preview-llamafied-qwen-llamafy
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|arc:challenge|25_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|gsm8k|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hellaswag|10_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-19T03-04-27.948723.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-19T03-04-27.948723.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- '**/details_harness|winogrande|5_2024-01-19T03-04-27.948723.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-19T03-04-27.948723.parquet'
- config_name: results
data_files:
- split: 2024_01_19T03_04_27.948723
path:
- results_2024-01-19T03-04-27.948723.parquet
- split: latest
path:
- results_2024-01-19T03-04-27.948723.parquet
---
# Dataset Card for Evaluation run of CausalLM/72B-preview-llamafied-qwen-llamafy
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [CausalLM/72B-preview-llamafied-qwen-llamafy](https://huggingface.co/CausalLM/72B-preview-llamafied-qwen-llamafy) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_CausalLM__72B-preview-llamafied-qwen-llamafy",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-01-19T03:04:27.948723](https://huggingface.co/datasets/open-llm-leaderboard/details_CausalLM__72B-preview-llamafied-qwen-llamafy/blob/main/results_2024-01-19T03-04-27.948723.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.765694970151033,
"acc_stderr": 0.02794684069092645,
"acc_norm": 0.769421113917392,
"acc_norm_stderr": 0.02847875554958271,
"mc1": 0.3671970624235006,
"mc1_stderr": 0.01687480500145318,
"mc2": 0.5254959632468497,
"mc2_stderr": 0.014732861007836748
},
"harness|arc:challenge|25": {
"acc": 0.6083617747440273,
"acc_stderr": 0.014264122124938213,
"acc_norm": 0.6518771331058021,
"acc_norm_stderr": 0.013921008595179344
},
"harness|hellaswag|10": {
"acc": 0.6477793268273252,
"acc_stderr": 0.004766860907171532,
"acc_norm": 0.8324039036048596,
"acc_norm_stderr": 0.003727438786513392
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001974,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001974
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.7407407407407407,
"acc_stderr": 0.03785714465066653,
"acc_norm": 0.7407407407407407,
"acc_norm_stderr": 0.03785714465066653
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.9078947368421053,
"acc_stderr": 0.02353268597044349,
"acc_norm": 0.9078947368421053,
"acc_norm_stderr": 0.02353268597044349
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.79,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.79,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.8264150943396227,
"acc_stderr": 0.02331058302600625,
"acc_norm": 0.8264150943396227,
"acc_norm_stderr": 0.02331058302600625
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8958333333333334,
"acc_stderr": 0.025545239210256917,
"acc_norm": 0.8958333333333334,
"acc_norm_stderr": 0.025545239210256917
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.64,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.64,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.56,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.56,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.7861271676300579,
"acc_stderr": 0.031265112061730445,
"acc_norm": 0.7861271676300579,
"acc_norm_stderr": 0.031265112061730445
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.5392156862745098,
"acc_stderr": 0.04959859966384181,
"acc_norm": 0.5392156862745098,
"acc_norm_stderr": 0.04959859966384181
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.84,
"acc_stderr": 0.03684529491774709,
"acc_norm": 0.84,
"acc_norm_stderr": 0.03684529491774709
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.8,
"acc_stderr": 0.026148818018424502,
"acc_norm": 0.8,
"acc_norm_stderr": 0.026148818018424502
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5614035087719298,
"acc_stderr": 0.04668000738510455,
"acc_norm": 0.5614035087719298,
"acc_norm_stderr": 0.04668000738510455
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.8,
"acc_stderr": 0.0333333333333333,
"acc_norm": 0.8,
"acc_norm_stderr": 0.0333333333333333
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.6878306878306878,
"acc_stderr": 0.023865206836972585,
"acc_norm": 0.6878306878306878,
"acc_norm_stderr": 0.023865206836972585
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5476190476190477,
"acc_stderr": 0.044518079590553275,
"acc_norm": 0.5476190476190477,
"acc_norm_stderr": 0.044518079590553275
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8935483870967742,
"acc_stderr": 0.01754510295165663,
"acc_norm": 0.8935483870967742,
"acc_norm_stderr": 0.01754510295165663
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.6600985221674877,
"acc_stderr": 0.033327690684107895,
"acc_norm": 0.6600985221674877,
"acc_norm_stderr": 0.033327690684107895
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816505,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816505
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8606060606060606,
"acc_stderr": 0.0270459488258654,
"acc_norm": 0.8606060606060606,
"acc_norm_stderr": 0.0270459488258654
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.9393939393939394,
"acc_stderr": 0.01699999492742161,
"acc_norm": 0.9393939393939394,
"acc_norm_stderr": 0.01699999492742161
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9896373056994818,
"acc_stderr": 0.007308424386792194,
"acc_norm": 0.9896373056994818,
"acc_norm_stderr": 0.007308424386792194
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.8102564102564103,
"acc_stderr": 0.019880165406588768,
"acc_norm": 0.8102564102564103,
"acc_norm_stderr": 0.019880165406588768
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.5185185185185185,
"acc_stderr": 0.030464621718895322,
"acc_norm": 0.5185185185185185,
"acc_norm_stderr": 0.030464621718895322
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.8277310924369747,
"acc_stderr": 0.02452866497130543,
"acc_norm": 0.8277310924369747,
"acc_norm_stderr": 0.02452866497130543
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.5364238410596026,
"acc_stderr": 0.04071636065944217,
"acc_norm": 0.5364238410596026,
"acc_norm_stderr": 0.04071636065944217
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.9302752293577982,
"acc_stderr": 0.010919426411848605,
"acc_norm": 0.9302752293577982,
"acc_norm_stderr": 0.010919426411848605
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.6851851851851852,
"acc_stderr": 0.0316746870682898,
"acc_norm": 0.6851851851851852,
"acc_norm_stderr": 0.0316746870682898
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9215686274509803,
"acc_stderr": 0.018869514646658928,
"acc_norm": 0.9215686274509803,
"acc_norm_stderr": 0.018869514646658928
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8945147679324894,
"acc_stderr": 0.019995560723758535,
"acc_norm": 0.8945147679324894,
"acc_norm_stderr": 0.019995560723758535
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.8116591928251121,
"acc_stderr": 0.026241132996407252,
"acc_norm": 0.8116591928251121,
"acc_norm_stderr": 0.026241132996407252
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8778625954198473,
"acc_stderr": 0.02871877688934232,
"acc_norm": 0.8778625954198473,
"acc_norm_stderr": 0.02871877688934232
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8677685950413223,
"acc_stderr": 0.0309227883204458,
"acc_norm": 0.8677685950413223,
"acc_norm_stderr": 0.0309227883204458
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8518518518518519,
"acc_stderr": 0.03434300243630999,
"acc_norm": 0.8518518518518519,
"acc_norm_stderr": 0.03434300243630999
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.8588957055214724,
"acc_stderr": 0.027351605518389752,
"acc_norm": 0.8588957055214724,
"acc_norm_stderr": 0.027351605518389752
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.6696428571428571,
"acc_stderr": 0.044642857142857116,
"acc_norm": 0.6696428571428571,
"acc_norm_stderr": 0.044642857142857116
},
"harness|hendrycksTest-management|5": {
"acc": 0.8640776699029126,
"acc_stderr": 0.03393295729761011,
"acc_norm": 0.8640776699029126,
"acc_norm_stderr": 0.03393295729761011
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9401709401709402,
"acc_stderr": 0.015537514263253878,
"acc_norm": 0.9401709401709402,
"acc_norm_stderr": 0.015537514263253878
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.88,
"acc_stderr": 0.032659863237109066,
"acc_norm": 0.88,
"acc_norm_stderr": 0.032659863237109066
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.9195402298850575,
"acc_stderr": 0.009726831316141866,
"acc_norm": 0.9195402298850575,
"acc_norm_stderr": 0.009726831316141866
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.8497109826589595,
"acc_stderr": 0.019239318783904717,
"acc_norm": 0.8497109826589595,
"acc_norm_stderr": 0.019239318783904717
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.5664804469273743,
"acc_stderr": 0.016574027219517635,
"acc_norm": 0.5664804469273743,
"acc_norm_stderr": 0.016574027219517635
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.8562091503267973,
"acc_stderr": 0.020091188936043714,
"acc_norm": 0.8562091503267973,
"acc_norm_stderr": 0.020091188936043714
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.8520900321543409,
"acc_stderr": 0.020163253806284125,
"acc_norm": 0.8520900321543409,
"acc_norm_stderr": 0.020163253806284125
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8858024691358025,
"acc_stderr": 0.017696832447213894,
"acc_norm": 0.8858024691358025,
"acc_norm_stderr": 0.017696832447213894
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.6347517730496454,
"acc_stderr": 0.02872386385328127,
"acc_norm": 0.6347517730496454,
"acc_norm_stderr": 0.02872386385328127
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.6271186440677966,
"acc_stderr": 0.012350630058333364,
"acc_norm": 0.6271186440677966,
"acc_norm_stderr": 0.012350630058333364
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.8272058823529411,
"acc_stderr": 0.02296606758558181,
"acc_norm": 0.8272058823529411,
"acc_norm_stderr": 0.02296606758558181
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.8186274509803921,
"acc_stderr": 0.015588643495370457,
"acc_norm": 0.8186274509803921,
"acc_norm_stderr": 0.015588643495370457
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7545454545454545,
"acc_stderr": 0.041220665028782855,
"acc_norm": 0.7545454545454545,
"acc_norm_stderr": 0.041220665028782855
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7918367346938775,
"acc_stderr": 0.025991117672813296,
"acc_norm": 0.7918367346938775,
"acc_norm_stderr": 0.025991117672813296
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8855721393034826,
"acc_stderr": 0.022509345325101706,
"acc_norm": 0.8855721393034826,
"acc_norm_stderr": 0.022509345325101706
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.96,
"acc_stderr": 0.01969463855669321,
"acc_norm": 0.96,
"acc_norm_stderr": 0.01969463855669321
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5662650602409639,
"acc_stderr": 0.03858158940685515,
"acc_norm": 0.5662650602409639,
"acc_norm_stderr": 0.03858158940685515
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8830409356725146,
"acc_stderr": 0.02464806896136616,
"acc_norm": 0.8830409356725146,
"acc_norm_stderr": 0.02464806896136616
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3671970624235006,
"mc1_stderr": 0.01687480500145318,
"mc2": 0.5254959632468497,
"mc2_stderr": 0.014732861007836748
},
"harness|winogrande|5": {
"acc": 0.823993685872139,
"acc_stderr": 0.010703090882320705
},
"harness|gsm8k|5": {
"acc": 0.7156937073540561,
"acc_stderr": 0.012425078188395977
}
}
```
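As an illustration, the per-task accuracies in this JSON can be averaged into a single MMLU-style score with plain Python. The sketch below uses a three-task excerpt of the results above; the full file covers all 57 `hendrycksTest` subtasks:

```python
# Excerpt of the per-task results above; the full JSON contains
# all 57 hendrycksTest (MMLU) subtasks.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.39},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.7407407407407407},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.9078947368421053},
}

# Unweighted mean accuracy over the hendrycksTest subtasks (a common
# way to aggregate MMLU results).
mmlu_tasks = {k: v for k, v in results.items() if "hendrycksTest" in k}
mmlu_acc = sum(v["acc"] for v in mmlu_tasks.values()) / len(mmlu_tasks)
```

On this three-task excerpt the mean is ≈0.68; the same loop over the full results reproduces the per-benchmark aggregate.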
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
arieg/bw_spec_cls_4_17_noise_200 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '1644'
'1': '1649'
'2': '1661'
'3': '1663'
splits:
- name: train
num_bytes: 44066224.0
num_examples: 800
- name: test
num_bytes: 1101943.0
num_examples: 20
download_size: 22426644
dataset_size: 45168167.0
---
# Dataset Card for "bw_spec_cls_4_17_noise_200"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-eval-futin__guess-vi-f50546-2087567165 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- futin/guess
eval_info:
task: text_zero_shot_classification
model: bigscience/bloomz-3b
metrics: []
dataset_name: futin/guess
dataset_config: vi
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-3b
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. |
luizlzg/prefeitura_dataset_alltopics_v1 | ---
task_categories:
- text-generation
language:
- pt
configs:
- config_name: default
data_files:
- split: train
path: dataset_instrutivo_alltopics_treino*
- split: test
path: dataset_instrutivo_alltopics_teste*
- split: validation
path: dataset_instrutivo_alltopics_validation*
--- |
causal-lm/auto_cot_closed | ---
language: en
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 550496.0264900662
num_examples: 2754
- name: validation
num_bytes: 64211.29770992367
num_examples: 304
download_size: 320971
dataset_size: 614707.3241999899
---
# Dataset Card for "auto_cot_closed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bias-amplified-splits/qqp | ---
license: cc-by-4.0
dataset_info:
- config_name: minority_examples
features:
- name: question1
dtype: string
- name: question2
dtype: string
- name: label
dtype:
class_label:
names:
'0': not_duplicate
'1': duplicate
- name: idx
dtype: int32
splits:
- name: train.biased
num_bytes: 42391456
num_examples: 297735
- name: train.anti_biased
num_bytes: 8509364
num_examples: 66111
- name: validation.biased
num_bytes: 4698206
num_examples: 32968
- name: validation.anti_biased
num_bytes: 955548
num_examples: 7462
download_size: 70726976
dataset_size: 56554574
- config_name: partial_input
features:
- name: question1
dtype: string
- name: question2
dtype: string
- name: label
dtype:
class_label:
names:
'0': not_duplicate
'1': duplicate
- name: idx
dtype: int32
splits:
- name: train.biased
num_bytes: 42788212
num_examples: 297735
- name: train.anti_biased
num_bytes: 8112608
num_examples: 66111
- name: validation.biased
num_bytes: 4712327
num_examples: 33084
- name: validation.anti_biased
num_bytes: 941427
num_examples: 7346
download_size: 70726976
dataset_size: 56554574
task_categories:
- text-classification
language:
- en
pretty_name: Quora Questions Pairs
---
# Dataset Card for Bias-amplified Splits for QQP
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Annotations](#annotations)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** [Fighting Bias with Bias repo](https://github.com/schwartz-lab-nlp/fight-bias-with-bias)
- **Paper:** [arXiv](https://arxiv.org/abs/2305.18917)
- **Point of Contact:** [Yuval Reif](mailto:yuval.reif@mail.huji.ac.il)
- **Original Dataset's Paper:** [GLUE](https://arxiv.org/abs/1804.07461)
### Dataset Summary
Bias-amplified splits is a novel evaluation framework to assess model robustness, by amplifying dataset biases in the training data and challenging models to generalize beyond them. This framework is defined by a bias-amplified training set and a hard, anti-biased test set, which we automatically extract from existing datasets using model-based methods.
Our experiments show that the identified anti-biased examples are naturally challenging for models, and moreover, models trained on bias-amplified data exhibit dramatic performance drops on anti-biased examples, which are not mitigated by common approaches to improve generalization.
Here we apply our framework to the Quora Question Pairs dataset (QQP), a dataset composed of question pairs where the task is to determine if the questions are paraphrases of each other (have the same meaning).
Our evaluation framework can be applied to any existing dataset, even those considered obsolete, to test model robustness. We hope our work will guide the development of robust models that do not rely on superficial biases and correlations.
#### Evaluation Results (DeBERTa-large)
##### For splits based on minority examples:
| Training Data \ Test Data | Original test | Anti-biased test |
|---------------------------|---------------|------------------|
| Original training split | 93.0 | 77.6 |
| Biased training split | 87.0 | 36.8 |
##### For splits based on partial-input model:
| Training Data \ Test Data | Original test | Anti-biased test |
|---------------------------|---------------|------------------|
| Original training split | 93.0 | 81.3 |
| Biased training split | 90.3 | 63.9 |
#### Loading the Data
```python
from datasets import load_dataset
# choose which bias detection method to use for the bias-amplified splits: either "minority_examples" or "partial_input"
dataset = load_dataset("bias-amplified-splits/qqp", "minority_examples")
# use the biased training split and anti-biased test split
train_dataset = dataset['train.biased']
eval_dataset = dataset['validation.anti_biased']
```
## Dataset Structure
### Data Instances
Data instances are taken directly from QQP (GLUE version), and re-split into biased and anti-biased subsets. Here is an example of an instance from the dataset:
```json
{
"idx": 56,
"question1": "How do I buy used car in India?",
"question2": "Which used car should I buy in India?",
"label": 0
}
```
### Data Fields
- `idx`: unique identifier for the example within its original data splits (e.g., validation set)
- `question1`: a question asked on Quora
- `question2`: a question asked on Quora
- `label`: one of `0` and `1` (`not duplicate` and `duplicate`)
### Data Splits
Bias-amplified splits require a method to detect *biased* and *anti-biased* examples in datasets. We release bias-amplified splits created with each of these two methods:
- **Minority examples**: A novel method we introduce that leverages representation learning and clustering for identifying anti-biased *minority examples* (Tu et al., 2020)—examples that defy common statistical patterns found in the rest of the dataset.
- **Partial-input baselines**: A common method for identifying biased examples containing annotation artifacts in a dataset, which examines the performance of models that are restricted to using only part of the input. Such models, if successful, are bound to rely on unintended or spurious patterns in the dataset.
Using each of the two methods, we split each of the original train and test splits into biased and anti-biased subsets. See the [paper](https://arxiv.org/abs/2305.18917) for more details.
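The partial-input idea can be sketched in a few lines. This is a hypothetical toy illustration, not the paper's actual model: `partial_input_predict` stands in for a trained classifier that sees only `question2`, and examples it labels correctly despite the missing input are routed to the biased split.

```python
# Toy sketch of a partial-input baseline (illustrative, not the paper's model).
# A classifier restricted to question2 alone should be at chance unless the
# dataset contains annotation artifacts; examples it still gets right are
# treated as "biased", the rest as "anti-biased".

examples = [
    {"question1": "How do I buy used car in India?",
     "question2": "Which used car should I buy in India?", "label": 0},
    {"question1": "What causes rain?",
     "question2": "Why does it rain?", "label": 1},
    {"question1": "How do magnets work?",
     "question2": "Why is the sky blue?", "label": 0},
]

def partial_input_predict(question2: str) -> int:
    # Stand-in for a trained single-question classifier: here, a crude
    # surface cue (questions starting with "why") predicts "duplicate".
    return 1 if question2.lower().startswith("why") else 0

biased, anti_biased = [], []
for ex in examples:
    # Correct despite seeing only half of the input -> likely exploitable bias.
    if partial_input_predict(ex["question2"]) == ex["label"]:
        biased.append(ex)
    else:
        anti_biased.append(ex)
```

A real partial-input baseline would replace the heuristic with a model fine-tuned on one question only, but the routing logic is the same.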
#### Minority Examples
| Dataset Split | Number of Instances in Split |
|--------------------------|------------------------------|
| Train - biased | 297735 |
| Train - anti-biased | 66111 |
| Validation - biased | 32968 |
| Validation - anti-biased | 7462 |
#### Partial-input Baselines
| Dataset Split | Number of Instances in Split |
|--------------------------|------------------------------|
| Train - biased | 297735 |
| Train - anti-biased | 66111 |
| Validation - biased | 33084 |
| Validation - anti-biased | 7346 |
## Dataset Creation
### Curation Rationale
NLP models often rely on superficial cues known as *dataset biases* to achieve impressive performance, and can fail on examples where these biases do not hold. To develop more robust, unbiased models, recent work aims to filter biased examples from training sets. We argue that in order to encourage the development of robust models, we should in fact **amplify** biases in the training sets, while adopting the challenge set approach and making test sets anti-biased. To implement our approach, we introduce a simple framework that can be applied automatically to any existing dataset to use it for testing model robustness.
### Annotations
#### Annotation process
No new annotations are required to create bias-amplified splits. Existing data instances are split into *biased* and *anti-biased* splits based on automatic model-based methods to detect such examples.
## Considerations for Using the Data
### Social Impact of Dataset
Bias-amplified splits were created to promote the development of robust NLP models that do not rely on superficial biases and correlations, and provide more challenging evaluation of existing systems.
### Discussion of Biases
We propose to use bias-amplified splits to complement benchmarks with challenging evaluation settings that test model robustness, in addition to the dataset’s main training and test sets. As such, while existing dataset biases are *amplified* during training with bias-amplified splits, these splits are intended primarily for model evaluation, to expose the bias-exploiting behaviors of models and to identify more robust models and effective robustness interventions.
## Additional Information
### Dataset Curators
Bias-amplified splits were introduced by Yuval Reif and Roy Schwartz from the [Hebrew University of Jerusalem](https://schwartz-lab-huji.github.io).
QQP data was released by Quora and released under the GLUE benchmark.
### Citation Information
```
@misc{reif2023fighting,
title = "Fighting Bias with Bias: Promoting Model Robustness by Amplifying Dataset Biases",
author = "Yuval Reif and Roy Schwartz",
month = may,
year = "2023",
url = "https://arxiv.org/pdf/2305.18917",
}
```
Source dataset:
```
@inproceedings{wang2019glue,
title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},
author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},
note={In the Proceedings of ICLR.},
year={2019}
}
``` |
ml-projects/clickbait-ml_dataset | ---
license: openrail
---
|
nicholasKluge/Pt-Corpus | ---
dataset_info:
features:
- name: text
dtype: string
- name: metadata
dtype: string
splits:
- name: train
num_bytes: 16220765175.988096
num_examples: 5768246
download_size: 11478008666
dataset_size: 16220765175.988096
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: other
task_categories:
- text-generation
language:
- pt
tags:
- portuguese
- language-modeling
pretty_name: Pt-Corpus
size_categories:
- 1M<n<10M
---
# Portuguese-Corpus
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://nkluge-correa.github.io/TeenyTinyLlama/
- **Repository:** https://github.com/Nkluge-correa/TeenyTinyLlama
- **Paper:** [TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese](https://arxiv.org/abs/2401.16640)
- **Point of Contact:** [AIRES at PUCRS](mailto:nicholas@airespucrs.org)
### Dataset Summary
Portuguese-Corpus is a concatenation of several portions of Brazilian Portuguese datasets found in the [Hub](https://huggingface.co/datasets?task_categories=task_categories:text-generation&language=language:pt&sort=trending).
In a tokenized format, the dataset (uncompressed) weighs 50 GB and has approximately 4.1B tokens. This version does not have instructional content.
### Supported Tasks and Leaderboards
This dataset can be utilized for tasks involving language modeling.
### Languages
Portuguese.
## Dataset Structure
### Data Instances
The dataset consists of the following features:
- **text:** a string of text in Portuguese.
- **metadata:** the source where that string originated.
### Data Fields
```python
{
"text": "A inteligência artificial (de sigla: IA; do inglês: artificial intelligence, de sigla: AI) é um campo de estudo multidisciplinar que abrange varias áreas do conhecimento.",
"metadata": "source: https://huggingface.co/datasets/graelo/wikipedia"
}
```
### Data Splits
Available splits are `train`.
```python
from datasets import load_dataset
dataset = load_dataset("nicholasKluge/Pt-Corpus", split='train')
# If you don't want to download the entire dataset, set streaming to `True`
dataset = load_dataset("nicholasKluge/Pt-Corpus", split='train', streaming=True)
```
## Dataset Creation
### Curation Rationale
This dataset was developed as part of the [TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese](https://arxiv.org/abs/2401.16640) paper. In this study, we document the development of open-foundation models tailored for use in low-resource settings, their limitations, and their benefits.
### Source Data
#### Initial Data Collection and Normalization
We utilized some of the filters used in Rae et al. ([2021](https://arxiv.org/abs/2112.11446)), besides using a [fine-tuned BERTimbau](https://huggingface.co/nicholasKluge/ToxicityModelPT) to exclude samples classified above a pre-defined toxicity threshold.
#### Who are the source language producers?
All text samples are native to Portuguese or translated from other languages to Portuguese (slight contamination of other languages should also be expected).
### Annotations
#### Annotation process
Portuguese-Corpus is a concatenation of several portions of Brazilian Portuguese datasets found in the [Hub](https://huggingface.co/datasets?task_categories=task_categories:text-generation&language=language:pt&sort=trending). We utilized some of the filters used in Rae et al. ([2021](https://arxiv.org/abs/2112.11446)), besides using a [fine-tuned BERTimbau](https://huggingface.co/nicholasKluge/ToxicityModelPT) to exclude samples classified above a pre-defined toxicity threshold.
#### Who are the annotators?
[Nicholas Kluge Corrêa](mailto:nicholas@airespucrs.org).
### Personal and Sensitive Information
This dataset, sourced from web scraping, may potentially contain personal and sensitive information, alongside offensive, toxic, and disturbing language.
## Considerations for Using the Data
### Social Impact of Dataset
The presence of personal and sensitive information within the dataset raises concerns about privacy and data protection, potentially leading to breaches of individuals' confidentiality and security. Furthermore, the inclusion of offensive, toxic, and disturbing language in the dataset poses risks of perpetuating harmful behaviors and attitudes, contributing to the normalization of hate speech and online toxicity. Therefore, careful handling and ethical considerations are essential to mitigate these potential social impacts and promote responsible dataset use.
### Discussion of Biases
The inclusion of offensive, toxic, and disturbing language in the dataset poses risks of perpetuating harmful behaviors and attitudes, contributing to the normalization of hate speech and online toxicity.
### Other Known Limitations
A significant portion of the data within the dataset has been translated using translation engines, potentially resulting in corrupted samples of both language and code. While useful for quickly converting text between languages, translation engines often struggle with accurately preserving the syntax, semantics, and context of programming languages. As a result, the translated code may contain errors, syntax inconsistencies, or even introduce vulnerabilities, rendering it unreliable or unusable for its intended purpose.
## Additional Information
### Dataset Curators
[Nicholas Kluge Corrêa](mailto:nicholas@airespucrs.org).
### Licensing Information
The following datasets (_only training splits are a part of the corpus_) and respective licenses form the Portuguese-Corpus:
- [Wikipedia](https://huggingface.co/datasets/graelo/wikipedia) (License: [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/))
- [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX) (License: [ODC-By](https://opendatacommons.org/licenses/by/1-0/), [cc0-1.0](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301#licensing-information))
- [OSCAR](https://huggingface.co/datasets/eduagarcia/OSCAR-2301-pt_dedup) (License: [cc0-1.0](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301#licensing-information))
- [CC-100](https://huggingface.co/datasets/eduagarcia/cc100-pt) (License: [Common Crawl terms of use](https://commoncrawl.org/terms-of-use/))
- [Roots Wikiquote](https://huggingface.co/datasets/bigscience-data/roots_pt_wikiquote) (License: [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/))
- [Roots Ted Talks](https://huggingface.co/datasets/bigscience-data/roots_pt_ted_talks_iwslt) (License: [CC BY-NC-ND 4.0](https://creativecommons.org/licenses/by-nc-nd/4.0/deed.en))
### Citation Information
```latex
@misc{correa24ttllama,
title = {TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese},
author = {Corr{\^e}a, Nicholas Kluge and Falk, Sophia and Fatimah, Shiza and Sen, Aniket and De Oliveira, Nythamar},
journal={arXiv preprint arXiv:2401.16640},
year={2024}
}
```
### Contributions
If you would like to contribute, contact me at [nicholas@airespucrs.org](mailto:nicholas@airespucrs.org)!
|
Zarakun/ukrainian_news | ---
language:
- uk
tags:
- uk
- news
---
## Info
The dataset consists of 1,919 Ukrainian news articles divided into 15 categories:
- business
- economy
- education
- fashion
- financy
- fun
- health
- kino
- porady
- realestate
- show
- smachnonews
- sport
- tech
- zakordon
## Loading
There are two different ways to download the dataset.
First, you can download it manually by fetching the zip file from `data/dataset.zip`.
Second, you can use this Python loading script:
```python
from datasets import load_dataset

train_dataset = load_dataset("Zarakun/ukrainian_news", split="train")
``` |
gmongaras/Anime_Subtitle_data2 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 131618936
num_examples: 1913619
download_size: 79562238
dataset_size: 131618936
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
Just a ton of anime subtitle data sourced from https://www.kitsunekko.net that is hopefully somewhat clean. I am attempting to split the lines by character. |
hanesh007/mtdataset_exp | ---
license: apache-2.0
---
|
wentingzhao/anthropic-hh-first-prompt | ---
dataset_info:
features:
- name: user
dtype: string
- name: system
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 931647
num_examples: 8552
download_size: 472764
dataset_size: 931647
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "anthropic-hh-first-prompt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
theojiang/image-text-dataset-subset-300k-captions_only_with_latents | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
- name: CLIP_text_latent
sequence: float32
- name: SD_VAE_image_latent
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 57507528731.75
num_examples: 380530
download_size: 60531502833
dataset_size: 57507528731.75
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CyberHarem/tomimi_arknights | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of tomimi/トミミ/特米米 (Arknights)
This is the dataset of tomimi/トミミ/特米米 (Arknights), containing 500 images and their tags.
The core tags of this character are `pointy_ears, crocodilian_tail, tail, ahoge, yellow_eyes, grey_hair, long_hair, large_tail, breasts, streaked_hair, multicolored_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 961.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tomimi_arknights/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 500 | 784.38 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tomimi_arknights/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1360 | 1.48 GiB | [Download](https://huggingface.co/datasets/CyberHarem/tomimi_arknights/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/tomimi_arknights',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 15 |  |  |  |  |  | 1girl, ass, black_panties, black_thighhighs, looking_at_viewer, solo, thighs, torn_thighhighs, bare_shoulders, from_behind, holding_staff, looking_back, detached_sleeves, simple_background, white_background, hood_up, partially_fingerless_gloves, long_sleeves, cowboy_shot, boots, grey_gloves, skindentation |
| 1 | 8 |  |  |  |  |  | 1girl, bare_shoulders, black_footwear, black_thighhighs, detached_sleeves, holding_staff, looking_at_viewer, solo, thighs, torn_thighhighs, ass, goggles_around_neck, grey_gloves, hood_up, partially_fingerless_gloves, black_panties, high_heel_boots, skindentation |
| 2 | 12 |  |  |  |  |  | 1girl, detached_sleeves, grey_gloves, holding_staff, hood_up, looking_at_viewer, partially_fingerless_gloves, solo, torn_thighhighs, white_flower, bare_shoulders, black_thighhighs, goggles_around_neck, long_sleeves, thighs, dress, shirt, simple_background, closed_mouth, sitting, hair_over_one_eye, white_background |
| 3 | 5 |  |  |  |  |  | 1girl, bare_shoulders, closed_mouth, detached_sleeves, goggles_around_neck, grey_gloves, looking_at_viewer, partially_fingerless_gloves, simple_background, solo, upper_body, white_background, white_flower, holding_staff, hood_up, long_sleeves, shirt, white_hair |
| 4 | 6 |  |  |  |  |  | 1girl, black_panties, black_thighhighs, from_behind, looking_at_viewer, looking_back, simple_background, solo, thighs, torn_thighhighs, white_background, ass_focus, hood |
| 5 | 16 |  |  |  |  |  | 1girl, bare_shoulders, hair_flower, looking_at_viewer, official_alternate_costume, solo, white_flower, white_one-piece_swimsuit, casual_one-piece_swimsuit, simple_background, white_background, white_hair, hair_between_eyes, head_wreath, thighs, blue_flower, very_long_hair, thigh_strap, blush, wrist_cuffs, cowboy_shot, parted_lips, from_behind, looking_back, sitting, small_breasts |
| 6 | 6 |  |  |  |  |  | 1girl, bare_shoulders, casual_one-piece_swimsuit, hair_flower, holding_staff, looking_at_viewer, official_alternate_costume, solo, white_flower, white_one-piece_swimsuit, blue_flower, head_wreath, very_long_hair, white_hair, hair_between_eyes, barefoot, parted_lips, thigh_strap, thighs, wariza |
| 7 | 6 |  |  |  |  |  | 1girl, bare_shoulders, black_gloves, looking_at_viewer, official_alternate_costume, solo, two_side_up, black_dress, cowboy_shot, hair_over_one_eye, partially_fingerless_gloves, simple_background, sleeveless_dress, black_choker, white_flower, black_hair, blush, holding, necklace, standing, torn_dress, white_background |
| 8 | 16 |  |  |  |  |  | 1girl, bare_shoulders, solo, two_side_up, hair_over_one_eye, looking_at_viewer, black_dress, official_alternate_costume, outdoors, black_gloves, black_hair, blue_sky, day, sleeveless_dress, open_mouth, jewelry, partially_fingerless_gloves, :d, cloud, cowboy_shot, water |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | ass | black_panties | black_thighhighs | looking_at_viewer | solo | thighs | torn_thighhighs | bare_shoulders | from_behind | holding_staff | looking_back | detached_sleeves | simple_background | white_background | hood_up | partially_fingerless_gloves | long_sleeves | cowboy_shot | boots | grey_gloves | skindentation | black_footwear | goggles_around_neck | high_heel_boots | white_flower | dress | shirt | closed_mouth | sitting | hair_over_one_eye | upper_body | white_hair | ass_focus | hood | hair_flower | official_alternate_costume | white_one-piece_swimsuit | casual_one-piece_swimsuit | hair_between_eyes | head_wreath | blue_flower | very_long_hair | thigh_strap | blush | wrist_cuffs | parted_lips | small_breasts | barefoot | wariza | black_gloves | two_side_up | black_dress | sleeveless_dress | black_choker | black_hair | holding | necklace | standing | torn_dress | outdoors | blue_sky | day | open_mouth | jewelry | :d | cloud | water |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:------|:----------------|:-------------------|:--------------------|:-------|:---------|:------------------|:-----------------|:--------------|:----------------|:---------------|:-------------------|:--------------------|:-------------------|:----------|:------------------------------|:---------------|:--------------|:--------|:--------------|:----------------|:-----------------|:----------------------|:------------------|:---------------|:--------|:--------|:---------------|:----------|:--------------------|:-------------|:-------------|:------------|:-------|:--------------|:-----------------------------|:---------------------------|:----------------------------|:--------------------|:--------------|:--------------|:-----------------|:--------------|:--------|:--------------|:--------------|:----------------|:-----------|:---------|:---------------|:--------------|:--------------|:-------------------|:---------------|:-------------|:----------|:-----------|:-----------|:-------------|:-----------|:-----------|:------|:-------------|:----------|:-----|:--------|:--------|
| 0 | 15 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | | X | | X | | | X | X | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 12 |  |  |  |  |  | X | | | X | X | X | X | X | X | | X | | X | X | X | X | X | X | | | X | | | X | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | | | | X | X | | | X | | X | | X | X | X | X | X | X | | | X | | | X | | X | | X | X | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 6 |  |  |  |  |  | X | | X | X | X | X | X | X | | X | | X | | X | X | | | | | | | | | | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 16 |  |  |  |  |  | X | | | | X | X | X | | X | X | | X | | X | X | | | | X | | | | | | | X | | | | X | | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 6 | 6 |  |  |  |  |  | X | | | | X | X | X | | X | | X | | | | | | | | | | | | | | | X | | | | | | | X | | | X | X | X | X | X | X | X | X | X | | | X | | X | X | | | | | | | | | | | | | | | | | | |
| 7 | 6 |  |  |  |  |  | X | | | | X | X | | | X | | | | | X | X | | X | | X | | | | | | | X | | | | | X | | | | | | X | | | | | | | | X | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | |
| 8 | 16 |  |  |  |  |  | X | | | | X | X | | | X | | | | | | | | X | | X | | | | | | | | | | | | X | | | | | | X | | | | | | | | | | | | | | X | X | X | X | | X | | | | | X | X | X | X | X | X | X | X |
|
jmichaelov/inverse_scaling_prize-pattern_matching_suppression | ---
license: cc-by-4.0
---
|
jjenny/tsefsesefsfa | ---
license: bsd-2-clause
---
|
open-llm-leaderboard/details_tyson0420__mixtral_stack_llama | ---
pretty_name: Evaluation run of tyson0420/mixtral_stack_llama
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [tyson0420/mixtral_stack_llama](https://huggingface.co/tyson0420/mixtral_stack_llama)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_tyson0420__mixtral_stack_llama\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-15T08:21:27.970055](https://huggingface.co/datasets/open-llm-leaderboard/details_tyson0420__mixtral_stack_llama/blob/main/results_2024-02-15T08-21-27.970055.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.28193427391738846,\n\
\ \"acc_stderr\": 0.03169270439313508,\n \"acc_norm\": 0.2845747380485041,\n\
\ \"acc_norm_stderr\": 0.03252371590260296,\n \"mc1\": 0.20563035495716034,\n\
\ \"mc1_stderr\": 0.014148482219460972,\n \"mc2\": 0.38221457050909724,\n\
\ \"mc2_stderr\": 0.015352799377174492\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.302901023890785,\n \"acc_stderr\": 0.013428241573185347,\n\
\ \"acc_norm\": 0.3455631399317406,\n \"acc_norm_stderr\": 0.013896938461145682\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.37860983867755427,\n\
\ \"acc_stderr\": 0.0048404936031662075,\n \"acc_norm\": 0.5023899621589325,\n\
\ \"acc_norm_stderr\": 0.004989724408664516\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542129,\n \
\ \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542129\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.23703703703703705,\n\
\ \"acc_stderr\": 0.03673731683969506,\n \"acc_norm\": 0.23703703703703705,\n\
\ \"acc_norm_stderr\": 0.03673731683969506\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.2565789473684211,\n \"acc_stderr\": 0.0355418036802569,\n\
\ \"acc_norm\": 0.2565789473684211,\n \"acc_norm_stderr\": 0.0355418036802569\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.3,\n\
\ \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \
\ \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.3018867924528302,\n \"acc_stderr\": 0.028254200344438662,\n\
\ \"acc_norm\": 0.3018867924528302,\n \"acc_norm_stderr\": 0.028254200344438662\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.24305555555555555,\n\
\ \"acc_stderr\": 0.03586879280080342,\n \"acc_norm\": 0.24305555555555555,\n\
\ \"acc_norm_stderr\": 0.03586879280080342\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.24,\n \"acc_stderr\": 0.04292346959909284,\n \
\ \"acc_norm\": 0.24,\n \"acc_norm_stderr\": 0.04292346959909284\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.36,\n \"acc_stderr\": 0.048241815132442176,\n \"acc_norm\": 0.36,\n\
\ \"acc_norm_stderr\": 0.048241815132442176\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.23699421965317918,\n\
\ \"acc_stderr\": 0.03242414757483099,\n \"acc_norm\": 0.23699421965317918,\n\
\ \"acc_norm_stderr\": 0.03242414757483099\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.16666666666666666,\n \"acc_stderr\": 0.03708284662416544,\n\
\ \"acc_norm\": 0.16666666666666666,\n \"acc_norm_stderr\": 0.03708284662416544\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.18,\n \"acc_stderr\": 0.038612291966536955,\n \"acc_norm\": 0.18,\n\
\ \"acc_norm_stderr\": 0.038612291966536955\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.3191489361702128,\n \"acc_stderr\": 0.03047297336338004,\n\
\ \"acc_norm\": 0.3191489361702128,\n \"acc_norm_stderr\": 0.03047297336338004\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.24561403508771928,\n\
\ \"acc_stderr\": 0.04049339297748142,\n \"acc_norm\": 0.24561403508771928,\n\
\ \"acc_norm_stderr\": 0.04049339297748142\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.2827586206896552,\n \"acc_stderr\": 0.03752833958003337,\n\
\ \"acc_norm\": 0.2827586206896552,\n \"acc_norm_stderr\": 0.03752833958003337\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.2619047619047619,\n \"acc_stderr\": 0.022644212615525218,\n \"\
acc_norm\": 0.2619047619047619,\n \"acc_norm_stderr\": 0.022644212615525218\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.19047619047619047,\n\
\ \"acc_stderr\": 0.035122074123020514,\n \"acc_norm\": 0.19047619047619047,\n\
\ \"acc_norm_stderr\": 0.035122074123020514\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.2967741935483871,\n\
\ \"acc_stderr\": 0.025988500792411894,\n \"acc_norm\": 0.2967741935483871,\n\
\ \"acc_norm_stderr\": 0.025988500792411894\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.270935960591133,\n \"acc_stderr\": 0.031270907132977,\n\
\ \"acc_norm\": 0.270935960591133,\n \"acc_norm_stderr\": 0.031270907132977\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.27,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\"\
: 0.27,\n \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.26666666666666666,\n \"acc_stderr\": 0.03453131801885415,\n\
\ \"acc_norm\": 0.26666666666666666,\n \"acc_norm_stderr\": 0.03453131801885415\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.3383838383838384,\n \"acc_stderr\": 0.03371124142626304,\n \"\
acc_norm\": 0.3383838383838384,\n \"acc_norm_stderr\": 0.03371124142626304\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.34196891191709844,\n \"acc_stderr\": 0.03423465100104282,\n\
\ \"acc_norm\": 0.34196891191709844,\n \"acc_norm_stderr\": 0.03423465100104282\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.31025641025641026,\n \"acc_stderr\": 0.02345467488940429,\n\
\ \"acc_norm\": 0.31025641025641026,\n \"acc_norm_stderr\": 0.02345467488940429\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.2518518518518518,\n \"acc_stderr\": 0.02646611753895992,\n \
\ \"acc_norm\": 0.2518518518518518,\n \"acc_norm_stderr\": 0.02646611753895992\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.3235294117647059,\n \"acc_stderr\": 0.030388353551886845,\n\
\ \"acc_norm\": 0.3235294117647059,\n \"acc_norm_stderr\": 0.030388353551886845\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.31788079470198677,\n \"acc_stderr\": 0.03802039760107903,\n \"\
acc_norm\": 0.31788079470198677,\n \"acc_norm_stderr\": 0.03802039760107903\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.3559633027522936,\n \"acc_stderr\": 0.020528559278244214,\n \"\
acc_norm\": 0.3559633027522936,\n \"acc_norm_stderr\": 0.020528559278244214\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.4398148148148148,\n \"acc_stderr\": 0.033851779760448106,\n \"\
acc_norm\": 0.4398148148148148,\n \"acc_norm_stderr\": 0.033851779760448106\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.24019607843137256,\n \"acc_stderr\": 0.02998373305591361,\n \"\
acc_norm\": 0.24019607843137256,\n \"acc_norm_stderr\": 0.02998373305591361\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.2911392405063291,\n \"acc_stderr\": 0.029571601065753374,\n \
\ \"acc_norm\": 0.2911392405063291,\n \"acc_norm_stderr\": 0.029571601065753374\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.36771300448430494,\n\
\ \"acc_stderr\": 0.03236198350928275,\n \"acc_norm\": 0.36771300448430494,\n\
\ \"acc_norm_stderr\": 0.03236198350928275\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.24427480916030533,\n \"acc_stderr\": 0.03768335959728745,\n\
\ \"acc_norm\": 0.24427480916030533,\n \"acc_norm_stderr\": 0.03768335959728745\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.2727272727272727,\n \"acc_stderr\": 0.04065578140908705,\n \"\
acc_norm\": 0.2727272727272727,\n \"acc_norm_stderr\": 0.04065578140908705\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.25,\n\
\ \"acc_stderr\": 0.04186091791394607,\n \"acc_norm\": 0.25,\n \
\ \"acc_norm_stderr\": 0.04186091791394607\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.2392638036809816,\n \"acc_stderr\": 0.033519538795212696,\n\
\ \"acc_norm\": 0.2392638036809816,\n \"acc_norm_stderr\": 0.033519538795212696\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.24107142857142858,\n\
\ \"acc_stderr\": 0.040598672469526864,\n \"acc_norm\": 0.24107142857142858,\n\
\ \"acc_norm_stderr\": 0.040598672469526864\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.33980582524271846,\n \"acc_stderr\": 0.04689765937278135,\n\
\ \"acc_norm\": 0.33980582524271846,\n \"acc_norm_stderr\": 0.04689765937278135\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.25213675213675213,\n\
\ \"acc_stderr\": 0.02844796547623102,\n \"acc_norm\": 0.25213675213675213,\n\
\ \"acc_norm_stderr\": 0.02844796547623102\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.26,\n \"acc_stderr\": 0.044084400227680794,\n \
\ \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.044084400227680794\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.28991060025542786,\n\
\ \"acc_stderr\": 0.016225017944770957,\n \"acc_norm\": 0.28991060025542786,\n\
\ \"acc_norm_stderr\": 0.016225017944770957\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.29190751445086704,\n \"acc_stderr\": 0.02447699407624732,\n\
\ \"acc_norm\": 0.29190751445086704,\n \"acc_norm_stderr\": 0.02447699407624732\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.24692737430167597,\n\
\ \"acc_stderr\": 0.014422292204808864,\n \"acc_norm\": 0.24692737430167597,\n\
\ \"acc_norm_stderr\": 0.014422292204808864\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.32679738562091504,\n \"acc_stderr\": 0.02685729466328141,\n\
\ \"acc_norm\": 0.32679738562091504,\n \"acc_norm_stderr\": 0.02685729466328141\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.29260450160771706,\n\
\ \"acc_stderr\": 0.025839898334877983,\n \"acc_norm\": 0.29260450160771706,\n\
\ \"acc_norm_stderr\": 0.025839898334877983\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.3055555555555556,\n \"acc_stderr\": 0.025630824975621344,\n\
\ \"acc_norm\": 0.3055555555555556,\n \"acc_norm_stderr\": 0.025630824975621344\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.23404255319148937,\n \"acc_stderr\": 0.0252578613594324,\n \
\ \"acc_norm\": 0.23404255319148937,\n \"acc_norm_stderr\": 0.0252578613594324\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.24967405475880053,\n\
\ \"acc_stderr\": 0.01105453837783233,\n \"acc_norm\": 0.24967405475880053,\n\
\ \"acc_norm_stderr\": 0.01105453837783233\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.30514705882352944,\n \"acc_stderr\": 0.027971541370170598,\n\
\ \"acc_norm\": 0.30514705882352944,\n \"acc_norm_stderr\": 0.027971541370170598\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.272875816993464,\n \"acc_stderr\": 0.01802047414839358,\n \
\ \"acc_norm\": 0.272875816993464,\n \"acc_norm_stderr\": 0.01802047414839358\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.2818181818181818,\n\
\ \"acc_stderr\": 0.04309118709946459,\n \"acc_norm\": 0.2818181818181818,\n\
\ \"acc_norm_stderr\": 0.04309118709946459\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.23673469387755103,\n \"acc_stderr\": 0.02721283588407316,\n\
\ \"acc_norm\": 0.23673469387755103,\n \"acc_norm_stderr\": 0.02721283588407316\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.263681592039801,\n\
\ \"acc_stderr\": 0.031157150869355558,\n \"acc_norm\": 0.263681592039801,\n\
\ \"acc_norm_stderr\": 0.031157150869355558\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.3192771084337349,\n\
\ \"acc_stderr\": 0.0362933532994786,\n \"acc_norm\": 0.3192771084337349,\n\
\ \"acc_norm_stderr\": 0.0362933532994786\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.21052631578947367,\n \"acc_stderr\": 0.03126781714663179,\n\
\ \"acc_norm\": 0.21052631578947367,\n \"acc_norm_stderr\": 0.03126781714663179\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.20563035495716034,\n\
\ \"mc1_stderr\": 0.014148482219460972,\n \"mc2\": 0.38221457050909724,\n\
\ \"mc2_stderr\": 0.015352799377174492\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.5730071033938438,\n \"acc_stderr\": 0.01390187807257506\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.001516300227445034,\n \
\ \"acc_stderr\": 0.0010717793485492606\n }\n}\n```"
repo_url: https://huggingface.co/tyson0420/mixtral_stack_llama
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|arc:challenge|25_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|gsm8k|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hellaswag|10_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-15T08-21-27.970055.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-15T08-21-27.970055.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- '**/details_harness|winogrande|5_2024-02-15T08-21-27.970055.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-15T08-21-27.970055.parquet'
- config_name: results
data_files:
- split: 2024_02_15T08_21_27.970055
path:
- results_2024-02-15T08-21-27.970055.parquet
- split: latest
path:
- results_2024-02-15T08-21-27.970055.parquet
---
# Dataset Card for Evaluation run of tyson0420/mixtral_stack_llama
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [tyson0420/mixtral_stack_llama](https://huggingface.co/tyson0420/mixtral_stack_llama) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration; the split is named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_tyson0420__mixtral_stack_llama",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-02-15T08:21:27.970055](https://huggingface.co/datasets/open-llm-leaderboard/details_tyson0420__mixtral_stack_llama/blob/main/results_2024-02-15T08-21-27.970055.json) (note that there might be results for other tasks in the repositories if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.28193427391738846,
"acc_stderr": 0.03169270439313508,
"acc_norm": 0.2845747380485041,
"acc_norm_stderr": 0.03252371590260296,
"mc1": 0.20563035495716034,
"mc1_stderr": 0.014148482219460972,
"mc2": 0.38221457050909724,
"mc2_stderr": 0.015352799377174492
},
"harness|arc:challenge|25": {
"acc": 0.302901023890785,
"acc_stderr": 0.013428241573185347,
"acc_norm": 0.3455631399317406,
"acc_norm_stderr": 0.013896938461145682
},
"harness|hellaswag|10": {
"acc": 0.37860983867755427,
"acc_stderr": 0.0048404936031662075,
"acc_norm": 0.5023899621589325,
"acc_norm_stderr": 0.004989724408664516
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542129,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542129
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.23703703703703705,
"acc_stderr": 0.03673731683969506,
"acc_norm": 0.23703703703703705,
"acc_norm_stderr": 0.03673731683969506
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.2565789473684211,
"acc_stderr": 0.0355418036802569,
"acc_norm": 0.2565789473684211,
"acc_norm_stderr": 0.0355418036802569
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.3018867924528302,
"acc_stderr": 0.028254200344438662,
"acc_norm": 0.3018867924528302,
"acc_norm_stderr": 0.028254200344438662
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.24305555555555555,
"acc_stderr": 0.03586879280080342,
"acc_norm": 0.24305555555555555,
"acc_norm_stderr": 0.03586879280080342
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.24,
"acc_stderr": 0.04292346959909284,
"acc_norm": 0.24,
"acc_norm_stderr": 0.04292346959909284
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.36,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.36,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.23699421965317918,
"acc_stderr": 0.03242414757483099,
"acc_norm": 0.23699421965317918,
"acc_norm_stderr": 0.03242414757483099
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.16666666666666666,
"acc_stderr": 0.03708284662416544,
"acc_norm": 0.16666666666666666,
"acc_norm_stderr": 0.03708284662416544
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.18,
"acc_stderr": 0.038612291966536955,
"acc_norm": 0.18,
"acc_norm_stderr": 0.038612291966536955
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.3191489361702128,
"acc_stderr": 0.03047297336338004,
"acc_norm": 0.3191489361702128,
"acc_norm_stderr": 0.03047297336338004
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.24561403508771928,
"acc_stderr": 0.04049339297748142,
"acc_norm": 0.24561403508771928,
"acc_norm_stderr": 0.04049339297748142
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.2827586206896552,
"acc_stderr": 0.03752833958003337,
"acc_norm": 0.2827586206896552,
"acc_norm_stderr": 0.03752833958003337
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.2619047619047619,
"acc_stderr": 0.022644212615525218,
"acc_norm": 0.2619047619047619,
"acc_norm_stderr": 0.022644212615525218
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.19047619047619047,
"acc_stderr": 0.035122074123020514,
"acc_norm": 0.19047619047619047,
"acc_norm_stderr": 0.035122074123020514
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.2967741935483871,
"acc_stderr": 0.025988500792411894,
"acc_norm": 0.2967741935483871,
"acc_norm_stderr": 0.025988500792411894
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.270935960591133,
"acc_stderr": 0.031270907132977,
"acc_norm": 0.270935960591133,
"acc_norm_stderr": 0.031270907132977
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.27,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.27,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.26666666666666666,
"acc_stderr": 0.03453131801885415,
"acc_norm": 0.26666666666666666,
"acc_norm_stderr": 0.03453131801885415
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.3383838383838384,
"acc_stderr": 0.03371124142626304,
"acc_norm": 0.3383838383838384,
"acc_norm_stderr": 0.03371124142626304
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.34196891191709844,
"acc_stderr": 0.03423465100104282,
"acc_norm": 0.34196891191709844,
"acc_norm_stderr": 0.03423465100104282
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.31025641025641026,
"acc_stderr": 0.02345467488940429,
"acc_norm": 0.31025641025641026,
"acc_norm_stderr": 0.02345467488940429
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2518518518518518,
"acc_stderr": 0.02646611753895992,
"acc_norm": 0.2518518518518518,
"acc_norm_stderr": 0.02646611753895992
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.3235294117647059,
"acc_stderr": 0.030388353551886845,
"acc_norm": 0.3235294117647059,
"acc_norm_stderr": 0.030388353551886845
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.31788079470198677,
"acc_stderr": 0.03802039760107903,
"acc_norm": 0.31788079470198677,
"acc_norm_stderr": 0.03802039760107903
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.3559633027522936,
"acc_stderr": 0.020528559278244214,
"acc_norm": 0.3559633027522936,
"acc_norm_stderr": 0.020528559278244214
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4398148148148148,
"acc_stderr": 0.033851779760448106,
"acc_norm": 0.4398148148148148,
"acc_norm_stderr": 0.033851779760448106
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.24019607843137256,
"acc_stderr": 0.02998373305591361,
"acc_norm": 0.24019607843137256,
"acc_norm_stderr": 0.02998373305591361
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.2911392405063291,
"acc_stderr": 0.029571601065753374,
"acc_norm": 0.2911392405063291,
"acc_norm_stderr": 0.029571601065753374
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.36771300448430494,
"acc_stderr": 0.03236198350928275,
"acc_norm": 0.36771300448430494,
"acc_norm_stderr": 0.03236198350928275
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.24427480916030533,
"acc_stderr": 0.03768335959728745,
"acc_norm": 0.24427480916030533,
"acc_norm_stderr": 0.03768335959728745
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.2727272727272727,
"acc_stderr": 0.04065578140908705,
"acc_norm": 0.2727272727272727,
"acc_norm_stderr": 0.04065578140908705
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.25,
"acc_stderr": 0.04186091791394607,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04186091791394607
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.2392638036809816,
"acc_stderr": 0.033519538795212696,
"acc_norm": 0.2392638036809816,
"acc_norm_stderr": 0.033519538795212696
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.24107142857142858,
"acc_stderr": 0.040598672469526864,
"acc_norm": 0.24107142857142858,
"acc_norm_stderr": 0.040598672469526864
},
"harness|hendrycksTest-management|5": {
"acc": 0.33980582524271846,
"acc_stderr": 0.04689765937278135,
"acc_norm": 0.33980582524271846,
"acc_norm_stderr": 0.04689765937278135
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.25213675213675213,
"acc_stderr": 0.02844796547623102,
"acc_norm": 0.25213675213675213,
"acc_norm_stderr": 0.02844796547623102
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.26,
"acc_stderr": 0.044084400227680794,
"acc_norm": 0.26,
"acc_norm_stderr": 0.044084400227680794
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.28991060025542786,
"acc_stderr": 0.016225017944770957,
"acc_norm": 0.28991060025542786,
"acc_norm_stderr": 0.016225017944770957
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.29190751445086704,
"acc_stderr": 0.02447699407624732,
"acc_norm": 0.29190751445086704,
"acc_norm_stderr": 0.02447699407624732
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.24692737430167597,
"acc_stderr": 0.014422292204808864,
"acc_norm": 0.24692737430167597,
"acc_norm_stderr": 0.014422292204808864
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.32679738562091504,
"acc_stderr": 0.02685729466328141,
"acc_norm": 0.32679738562091504,
"acc_norm_stderr": 0.02685729466328141
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.29260450160771706,
"acc_stderr": 0.025839898334877983,
"acc_norm": 0.29260450160771706,
"acc_norm_stderr": 0.025839898334877983
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.3055555555555556,
"acc_stderr": 0.025630824975621344,
"acc_norm": 0.3055555555555556,
"acc_norm_stderr": 0.025630824975621344
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.23404255319148937,
"acc_stderr": 0.0252578613594324,
"acc_norm": 0.23404255319148937,
"acc_norm_stderr": 0.0252578613594324
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.24967405475880053,
"acc_stderr": 0.01105453837783233,
"acc_norm": 0.24967405475880053,
"acc_norm_stderr": 0.01105453837783233
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.30514705882352944,
"acc_stderr": 0.027971541370170598,
"acc_norm": 0.30514705882352944,
"acc_norm_stderr": 0.027971541370170598
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.272875816993464,
"acc_stderr": 0.01802047414839358,
"acc_norm": 0.272875816993464,
"acc_norm_stderr": 0.01802047414839358
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.2818181818181818,
"acc_stderr": 0.04309118709946459,
"acc_norm": 0.2818181818181818,
"acc_norm_stderr": 0.04309118709946459
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.23673469387755103,
"acc_stderr": 0.02721283588407316,
"acc_norm": 0.23673469387755103,
"acc_norm_stderr": 0.02721283588407316
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.263681592039801,
"acc_stderr": 0.031157150869355558,
"acc_norm": 0.263681592039801,
"acc_norm_stderr": 0.031157150869355558
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-virology|5": {
"acc": 0.3192771084337349,
"acc_stderr": 0.0362933532994786,
"acc_norm": 0.3192771084337349,
"acc_norm_stderr": 0.0362933532994786
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.21052631578947367,
"acc_stderr": 0.03126781714663179,
"acc_norm": 0.21052631578947367,
"acc_norm_stderr": 0.03126781714663179
},
"harness|truthfulqa:mc|0": {
"mc1": 0.20563035495716034,
"mc1_stderr": 0.014148482219460972,
"mc2": 0.38221457050909724,
"mc2_stderr": 0.015352799377174492
},
"harness|winogrande|5": {
"acc": 0.5730071033938438,
"acc_stderr": 0.01390187807257506
},
"harness|gsm8k|5": {
"acc": 0.001516300227445034,
"acc_stderr": 0.0010717793485492606
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
bz-arc13/tool_learning_v3 | ---
dataset_info:
features:
- name: functions
dtype: string
- name: conversation
list:
- name: content
dtype: string
- name: function_call
struct:
- name: arguments
dtype: string
- name: name
dtype: string
- name: name
dtype: string
- name: role
dtype: string
splits:
- name: g1
num_bytes: 94244187.43350647
num_examples: 15750
- name: g2
num_bytes: 35120858.30517241
num_examples: 5139
- name: g3
num_bytes: 13134530.220500596
num_examples: 1674
- name: luban
num_bytes: 11650474.0
num_examples: 2111
- name: v1
num_bytes: 67859990.0
num_examples: 44736
download_size: 60727852
dataset_size: 222010039.9591795
configs:
- config_name: default
data_files:
- split: g1
path: data/g1-*
- split: g2
path: data/g2-*
- split: g3
path: data/g3-*
- split: luban
path: data/luban-*
- split: v1
path: data/v1-*
---
|
tasksource/lsat-ar | ---
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: context
dtype: string
- name: id_string
dtype: string
- name: answers
sequence: string
- name: label
dtype: int64
- name: question
dtype: string
splits:
- name: validation
num_bytes: 216357
num_examples: 231
- name: train
num_bytes: 1413916
num_examples: 1585
- name: test
num_bytes: 214880
num_examples: 230
download_size: 337968
dataset_size: 1845153
---
# Dataset Card for "lsat-ar"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rcds/wikipedia-for-mask-filling | ---
annotations_creators:
- other
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- multilingual
paperswithcode_id: null
pretty_name: "wikipedia pages chunked for fill-mask"
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- fill-mask
---
# Preprocessed version of rcds/wikipedia-persons-masked
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Contains ~70k pages from Wikipedia, each describing a person. For each page, the person described in the text
is masked with a <mask> token. The ground truth for every mask is provided.
Each row contains a part of a wiki page, specified by the size parameter, which limits the maximum size in number of tokens per text chunk.
For each chunk, the expected name for each mask is given.
### Supported Tasks and Leaderboards
The dataset supports the task of fill-mask, but it can also be used for other tasks such as question answering,
e.g. "Who is <mask>?"
### Languages
*English only*
## Dataset Structure
In `/data` you will find different versions of the full dataset, with original and paraphrased versions, each chunked to 4096 and 512 tokens.
Use the dataset like this:
```python
from datasets import load_dataset
dataset = load_dataset('rcds/wikipedia-persons-masked', split='train', type='original', size='512')
```
### Data Fields
Columns are:
- texts: the text chunks
- masks: the names for each of the masks in the chunks
### Data Splits
There are no splits, only a default train.
## Dataset Creation
Created by using the tokenizer from allenai/longformer-base-4096 for the 4096-tokens-per-chunk version,
and the xlm-roberta-large tokenizer for the 512-token version. Chunks are split to fit those token sizes,
with the splits ensuring no words are split in half.
Possible improvement: the last chunk of a page might be much shorter; it could be joined with part of the previous chunk to give the last chunk more tokens.
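A minimal sketch of the word-boundary chunking described above (an assumption of how it might look, not the dataset's actual preprocessing code): splitting on whole words guarantees no word is cut in half, and also shows why the last chunk of a page can end up much shorter.

```python
# Hypothetical sketch of word-boundary chunking; max_len is a word budget
# standing in for the real token budget (4096 or 512).
def chunk_words(words: list[str], max_len: int) -> list[list[str]]:
    """Split a word list into chunks of at most max_len words each,
    never splitting a word in half. The final chunk may be shorter."""
    return [words[i:i + max_len] for i in range(0, len(words), max_len)]

words = "one two three four five".split()
print(chunk_words(words, 2))
# → [['one', 'two'], ['three', 'four'], ['five']]  (last chunk is shorter)
```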
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
TODO add citation
```
### Contributions
Thanks to [@skatinger](https://github.com/skatinger) for adding this dataset.
|
davidberenstein1957/ultrafeedback-binarized-cleaned-and-filtered-avg-rating-split | ---
dataset_info:
features:
- name: source
dtype: string
- name: prompt
dtype: string
- name: average_rating
dtype: float64
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: chosen-rating
dtype: float64
- name: chosen-model
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected-rating
dtype: float64
- name: rejected-model
dtype: string
splits:
- name: train
num_bytes: 23989266.3470273
num_examples: 5348
- name: test
num_bytes: 5997316.586756825
num_examples: 1337
download_size: 18602901
dataset_size: 29986582.933784127
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
pharaouk/ff | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_conversation_id
dtype: string
- name: cluster
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 3109254774
num_examples: 3278633
download_size: 0
dataset_size: 3109254774
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "SkunkData-Corpus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dronemeout/scans | ---
license: mit
---
|
zhangshuoming/math_23k_double_standalone | ---
dataset_info:
features:
- name: text
struct:
- name: asm
dtype: string
- name: c
dtype: string
splits:
- name: train
num_bytes: 27029874
num_examples: 21104
download_size: 0
dataset_size: 27029874
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "math_23k_double_standalone"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zhengxuanzenwu/fair_glue_sst2 | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': positive
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 4681603
num_examples: 67349
- name: validation
num_bytes: 53126.0
num_examples: 436
- name: test
num_bytes: 106252
num_examples: 872
download_size: 3221811
dataset_size: 4840981.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
vilm/refinedweb-1m-medium | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 5454844691
num_examples: 1000000
download_size: 3346600355
dataset_size: 5454844691
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# RefinedWeb 1M Medium
Curated RefinedWeb with medium context length (2048 <= ctx_len <= 8192) |
PyWebSol/ru-slimorca-300k | ---
license: apache-2.0
dataset_info:
features:
- name: role
sequence: string
- name: content
sequence: string
splits:
- name: train
num_bytes: 1011056619
num_examples: 300013
download_size: 452519640
dataset_size: 1011056619
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- text-generation
- question-answering
- conversational
language:
- ru
---
The part of the `Open-Orca/SlimOrca` dataset translated into Russian.
To order a translation of your dataset into any language in the world: https://t.me/PyWebSol |
open-llm-leaderboard/details_facebook__opt-1.3b | ---
pretty_name: Evaluation run of facebook/opt-1.3b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_facebook__opt-1.3b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-19T03:17:25.770385](https://huggingface.co/datasets/open-llm-leaderboard/details_facebook__opt-1.3b/blob/main/results_2023-10-19T03-17-25.770385.json) (note\
\ that there might be results for other tasks in the repo if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0017827181208053692,\n\
\ \"em_stderr\": 0.0004320097346038933,\n \"f1\": 0.05017722315436251,\n\
\ \"f1_stderr\": 0.0012387308214165103,\n \"acc\": 0.2994953245415047,\n\
\ \"acc_stderr\": 0.0074273230901261535\n },\n \"harness|drop|3\":\
\ {\n \"em\": 0.0017827181208053692,\n \"em_stderr\": 0.0004320097346038933,\n\
\ \"f1\": 0.05017722315436251,\n \"f1_stderr\": 0.0012387308214165103\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.001516300227445034,\n \
\ \"acc_stderr\": 0.0010717793485492619\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.5974743488555643,\n \"acc_stderr\": 0.013782866831703044\n\
\ }\n}\n```"
repo_url: https://huggingface.co/facebook/opt-1.3b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|arc:challenge|25_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_19T03_17_25.770385
path:
- '**/details_harness|drop|3_2023-10-19T03-17-25.770385.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-19T03-17-25.770385.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_19T03_17_25.770385
path:
- '**/details_harness|gsm8k|5_2023-10-19T03-17-25.770385.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-19T03-17-25.770385.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hellaswag|10_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-18T14:50:30.777525.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-18T14:50:30.777525.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-18T14:50:30.777525.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_19T03_17_25.770385
path:
- '**/details_harness|winogrande|5_2023-10-19T03-17-25.770385.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-19T03-17-25.770385.parquet'
- config_name: results
data_files:
- split: 2023_08_18T14_50_30.777525
path:
- results_2023-08-18T14:50:30.777525.parquet
- split: 2023_10_19T03_17_25.770385
path:
- results_2023-10-19T03-17-25.770385.parquet
- split: latest
path:
- results_2023-10-19T03-17-25.770385.parquet
---
# Dataset Card for Evaluation run of facebook/opt-1.3b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/facebook/opt-1.3b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_facebook__opt-1.3b",
"harness_winogrande_5",
split="train")
```
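The timestamped split names in the configuration list above follow directly from the run timestamps; a minimal sketch of that mapping (an assumption inferred from the file names in this card, not an official API):

```python
def split_name(timestamp: str) -> str:
    """Map a run timestamp to its dataset split name.

    Dashes and colons become underscores; the microsecond dot is kept.
    """
    return timestamp.replace("-", "_").replace(":", "_")

print(split_name("2023-10-19T03:17:25.770385"))  # 2023_10_19T03_17_25.770385
```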
## Latest results
These are the [latest results from run 2023-10-19T03:17:25.770385](https://huggingface.co/datasets/open-llm-leaderboard/details_facebook__opt-1.3b/blob/main/results_2023-10-19T03-17-25.770385.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"em": 0.0017827181208053692,
"em_stderr": 0.0004320097346038933,
"f1": 0.05017722315436251,
"f1_stderr": 0.0012387308214165103,
"acc": 0.2994953245415047,
"acc_stderr": 0.0074273230901261535
},
"harness|drop|3": {
"em": 0.0017827181208053692,
"em_stderr": 0.0004320097346038933,
"f1": 0.05017722315436251,
"f1_stderr": 0.0012387308214165103
},
"harness|gsm8k|5": {
"acc": 0.001516300227445034,
"acc_stderr": 0.0010717793485492619
},
"harness|winogrande|5": {
"acc": 0.5974743488555643,
"acc_stderr": 0.013782866831703044
}
}
```
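The "all" block appears to average each metric over the tasks that report it (for instance, `acc` is the mean of the gsm8k and winogrande accuracies above). A minimal sketch of that aggregation, assuming simple averaging:

```python
results = {
    "harness|drop|3": {"em": 0.0017827181208053692, "f1": 0.05017722315436251},
    "harness|gsm8k|5": {"acc": 0.001516300227445034},
    "harness|winogrande|5": {"acc": 0.5974743488555643},
}

def aggregate(per_task: dict) -> dict:
    """Average each metric over the tasks that report it."""
    sums, counts = {}, {}
    for metrics in per_task.values():
        for name, value in metrics.items():
            sums[name] = sums.get(name, 0.0) + value
            counts[name] = counts.get(name, 0) + 1
    return {name: sums[name] / counts[name] for name in sums}

agg = aggregate(results)
print(agg["acc"])  # matches the "all" accuracy above (~0.2995)
```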
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
cestwc/anthology | ---
dataset_info:
features:
- name: title
dtype: string
- name: author
dtype: string
- name: year
dtype: int64
- name: abstract
dtype: string
- name: pages
dtype: string
- name: queryID
dtype: string
- name: query
dtype: string
- name: paperID
dtype: string
- name: include
dtype: int64
splits:
- name: train
num_bytes: 2533008313
num_examples: 3370094
download_size: 1053579996
dataset_size: 2533008313
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "anthology"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
P1ayer-1/niv2_submix_distribution | ---
dataset_info:
features:
- name: task_name
dtype: string
- name: num_demos
dtype: int64
splits:
- name: train
num_bytes: 78099
num_examples: 1556
download_size: 34712
dataset_size: 78099
---
# Dataset Card for "niv2_submix_distribution"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CVasNLPExperiments/cv-as-nlp-vision-example-flan-xxl | ---
dataset_info:
features:
- name: id
dtype: int64
- name: image
dtype: image
- name: prompt
dtype: string
- name: true_label
dtype: string
- name: prediction
dtype: string
splits:
- name: train
num_bytes: 119377.0
num_examples: 10
download_size: 119894
dataset_size: 119377.0
---
# Dataset Card for "cv-as-nlp-vision-example-flan-xxl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
fun1021183/cvt2_GS3_2 | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 540743793.3
num_examples: 3900
- name: test
num_bytes: 332492834.56
num_examples: 2480
download_size: 787636091
dataset_size: 873236627.8599999
---
# Dataset Card for "cvt2_GS3_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
FelipeGuerra/Colombian_Spanish_Cyberbullying_Dataset_1 | ---
license: mit
---
### Dataset Summary
This dataset consists of 3570 tweets, which were manually labeled as cyberbullying or not cyberbullying. A distinguishing feature of this dataset is that for a given word, there is an annotated tweet labeled as cyberbullying that contains that word, and another tweet labeled as not cyberbullying with the same word. This is made possible because the context in which the same word is used can vary, leading to tweets being classified differently.
For instance, tweets in the not cyberbullying category predominantly contain obscene words that, in their particular context, do not correspond with cyberbullying. An example is “Marica, se me olvidó ver el partido”. Additionally, the not cyberbullying category, to a lesser extent, includes tweets sourced from trends in the Colombian region. Twitter trends reflect the most popular topics and conversations in a given area at a specific time, essentially capturing what people are discussing and sharing online in that geographical locale.
Trend-based tweets were utilized for those instances where it was not feasible to obtain not cyberbullying tweets containing a specific offensive word or phrase, such as “ojala te violen”. Conversely, tweets labeled as cyberbullying might not always contain words or phrases that are deemed strong or obscene, like in the example “te voy a buscar”.
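The paired-word property described above can be sketched as a small check (a hypothetical helper, not part of the dataset's tooling): for each keyword, verify that it appears in at least one tweet of each label.

```python
def paired_keywords(tweets, keywords):
    """tweets: list of (text, label) pairs; label 0 = not cyberbullying, 1 = cyberbullying.

    Returns, per keyword, whether it occurs under both labels.
    """
    result = {}
    for kw in keywords:
        labels = {label for text, label in tweets if kw in text.lower()}
        result[kw] = labels == {0, 1}
    return result

sample = [
    ("Marica, se me olvidó ver el partido", 0),  # not cyberbullying
    ("marica, te voy a buscar", 1),              # cyberbullying
]
print(paired_keywords(sample, ["marica"]))  # {'marica': True}
```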
Cyberbullying and non-cyberbullying tweets are equally represented in the dataset. The keywords and phrases used in the creation of the dataset were selected based on the categories provided in the article Guidelines for the Fine-Grained Analysis of Cyberbullying authored by Cynthia Van Hee, Ben Verhoeven, Els Lefever, Guy De Pauw, Walter Daelemans, and Véronique Hoste. Four categories were included: insult, threat, curse, and defamation. The insult category involves the use of offensive words intended to verbally hurt another person, while threat aims to harm the victim's integrity. Curse includes words that wish harm or misfortune upon a person, and defamation seeks to damage the victim’s reputation. These categories were chosen to capture a broad representation of the forms in which cyberbullying can manifest. The tweets were labeled by an occupational therapist associated with the project. |
CyberHarem/milady_fireemblem | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of milady (Fire Emblem)
This is the dataset of Milady (Fire Emblem), containing 15 images and their tags.
The core tags of this character are `red_hair, red_eyes, short_hair, earrings`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:-------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 15 | 12.18 MiB | [Download](https://huggingface.co/datasets/CyberHarem/milady_fireemblem/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 15 | 7.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/milady_fireemblem/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 24 | 12.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/milady_fireemblem/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 15 | 10.95 MiB | [Download](https://huggingface.co/datasets/CyberHarem/milady_fireemblem/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 24 | 17.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/milady_fireemblem/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/milady_fireemblem',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------|
| 0 | 15 |  |  |  |  |  | 1girl, red_armor, solo, circlet, elbow_gloves, jewelry, belt, boots, shoulder_armor, skirt, thighhighs, spear |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | red_armor | solo | circlet | elbow_gloves | jewelry | belt | boots | shoulder_armor | skirt | thighhighs | spear |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:------------|:-------|:----------|:---------------|:----------|:-------|:--------|:-----------------|:--------|:-------------|:--------|
| 0 | 15 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X |
|
sharmaarushi17/HPCPerfOpt-Open-ended | ---
license: openrail
pretty_name: HPCPerfOpt (HPC Performance Optimization Benchmark)
configs:
- config_name: text
data_files:
- split: test
path: "text.csv"
- config_name: code
data_files:
- split: test
path: "code.csv"
task_categories:
- question-answering
tags:
- code
size_categories:
- n<1K
---
# Dataset Card for HPCPerfOpt (HPC Performance Optimization Dataset)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is a question-answering dataset for OpenMP performance-optimization questions. It contains open-ended questions of two types:
1. What is the performance issue in the given code snippet? - Text answers
2. Please generate the optimized version of the given OpenMP code for better performance. - Code answers
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
djemerson7k/testedo7 | ---
license: openrail
---
|
AravindVadlapudi02/Torgo_train-30_test-70 | ---
dataset_info:
features:
- name: label
dtype:
class_label:
names:
'0': control
'1': pathology
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1862083748
num_examples: 1939
- name: test
num_bytes: 4345502300
num_examples: 4525
download_size: 753824940
dataset_size: 6207586048
---
# Dataset Card for "Torgo_train-30_test-70"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pseudolab/autotrain-data-Medical_Terminology_Zephyr_2 | ---
dataset_info:
features:
- name: tags
dtype: string
- name: categories
dtype: string
- name: topics
dtype: string
- name: title
dtype: string
- name: es-title
dtype: string
- name: url
dtype: string
- name: es-bite
dtype: string
- name: audience
dtype: string
- name: segment
dtype: string
- name: insurance-status
dtype: string
- name: state
dtype: string
- name: condition
dtype: string
- name: autotrain_text
dtype: string
splits:
- name: train
num_bytes: 123044
num_examples: 257
- name: validation
num_bytes: 123044
num_examples: 257
download_size: 128192
dataset_size: 246088
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
# Dataset Card for "autotrain-data-Medical_Terminology_Zephyr_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
d0rj/boolq-ru | ---
annotations_creators:
- crowdsourced
language_creators:
- translated
language:
- ru
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- boolq
task_categories:
- text-classification
task_ids:
- natural-language-inference
paperswithcode_id: boolq
pretty_name: BoolQ (ru)
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: bool
- name: passage
dtype: string
splits:
- name: train
num_bytes: 10819511
num_examples: 9427
- name: validation
num_bytes: 3710872
num_examples: 3270
download_size: 7376712
dataset_size: 14530383
---
# boolq-ru
Translated version of [boolq](https://huggingface.co/datasets/boolq) dataset into Russian.
## Dataset Description
- **Homepage:** [https://github.com/google-research-datasets/boolean-questions](https://github.com/google-research-datasets/boolean-questions) |
Nerfgun3/lightning_style | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: false
---
# Lightning Style Embedding / Textual Inversion
## Usage
To use this embedding, you have to download the file and drop it into the "\stable-diffusion-webui\embeddings" folder.
To use it in a prompt: ```"art by lightning_style"```
If it is too strong, just add [] around it.
Trained for 10,000 steps.
I added a version trained for 7,500 steps in the files as well. If you want to use that version, remove the ```"-7500"``` from the file name and replace the 10k-step version in your folder.
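The bracket attenuation described above can be sketched as a tiny prompt helper (hypothetical, assuming AUTOMATIC1111-style `[]` weakening syntax):

```python
def embedding_prompt(name: str, attenuate: bool = False) -> str:
    """Build the prompt fragment; wrapping it in [] weakens the embedding's effect."""
    token = f"art by {name}"
    return f"[{token}]" if attenuate else token

print(embedding_prompt("lightning_style"))        # art by lightning_style
print(embedding_prompt("lightning_style", True))  # [art by lightning_style]
```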
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/HNHRcZg.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/8B31Umz.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/88sHalA.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/WhlLomb.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/a1Usv3u.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content
2. The author claims no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
taaredikahan23/medical-llama2-5k | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2165103
num_examples: 5452
download_size: 869829
dataset_size: 2165103
---
# Dataset Card for "medical-llama2-5k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-eval-inverse-scaling__quote-repetition-inverse-scaling__quot-3aff83-1695059594 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- inverse-scaling/quote-repetition
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-6.7b_eval
metrics: []
dataset_name: inverse-scaling/quote-repetition
dataset_config: inverse-scaling--quote-repetition
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-6.7b_eval
* Dataset: inverse-scaling/quote-repetition
* Config: inverse-scaling--quote-repetition
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MicPie](https://huggingface.co/MicPie) for evaluating this model. |
chirunder/admits | ---
dataset_info:
features:
- name: gre_quant
dtype: int64
- name: gre_verbal
dtype: int64
- name: gre_awa
dtype: float64
- name: gre_total
dtype: int64
- name: toefl
dtype: int64
- name: year
dtype: int64
- name: term
dtype: string
- name: grade_scale
dtype: int64
- name: ielts
dtype: float64
- name: grade_score
dtype: float64
- name: undergrad_major
dtype: string
- name: undergrad_university
dtype: string
- name: admits
sequence: string
- name: rejects
sequence: string
splits:
- name: train
num_bytes: 89517
num_examples: 320
download_size: 22092
dataset_size: 89517
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "admits_fyi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |