| datasetId (string, lengths 2–117) | card (string, lengths 19–1.01M) |
|---|---|
yangyz1230/promoter_no_tata_not_filtered | ---
dataset_info:
features:
- name: name
dtype: string
- name: sequence
dtype: string
- name: chrom
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: strand
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 16645569
num_examples: 47524
- name: test
num_bytes: 1841048
num_examples: 5271
download_size: 8884566
dataset_size: 18486617
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
fomobench/TalloS | ---
license: mit
---
|
yfyeung/medical | ---
license: cc-by-4.0
---
# A dataset of simulated patient-physician medical interviews with a focus on respiratory cases
Paper link: https://www.nature.com/articles/s41597-022-01423-1
## Dataset Description
The simulated medical conversation dataset is available on figshare.com.
The dataset is divided into two sets of files: audio files of the simulated conversations in mp3 format, and the transcripts of the audio files as text files.
There are 272 mp3 audio files and 272 corresponding transcript text files.
Each file is titled with three characters and four digits. RES stands for respiratory, GAS represents gastrointestinal, CAR is cardiovascular, MSK is musculoskeletal, DER is dermatological, and the four following digits represent the case number of the respective disease category. |
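As an illustration of this naming scheme, a small helper (hypothetical, not shipped with the dataset) can parse a file name such as `RES0034.mp3` into its disease category and case number:

```python
import re

# Map the documented three-letter prefixes to their disease categories.
CATEGORIES = {
    "RES": "respiratory",
    "GAS": "gastrointestinal",
    "CAR": "cardiovascular",
    "MSK": "musculoskeletal",
    "DER": "dermatological",
}

def parse_filename(name):
    """Parse a file name like 'RES0034.mp3' into (category, case_number)."""
    m = re.match(r"([A-Z]{3})(\d{4})", name)
    if m is None:
        raise ValueError(f"unexpected file name: {name}")
    prefix, digits = m.groups()
    return CATEGORIES[prefix], int(digits)
```

For example, `parse_filename("RES0034.mp3")` yields `("respiratory", 34)`.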
CyberHarem/flandre_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of flandre/フランドル/弗兰德尔 (Azur Lane)
This is the dataset of flandre/フランドル/弗兰德尔 (Azur Lane), containing 42 images and their tags.
The core tags of this character are `long_hair, bangs, white_hair, twintails, purple_eyes, breasts, hat, small_breasts, bow, ribbon, grey_eyes, low_twintails`, which are pruned in this dataset.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 42 | 83.12 MiB | [Download](https://huggingface.co/datasets/CyberHarem/flandre_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 42 | 37.51 MiB | [Download](https://huggingface.co/datasets/CyberHarem/flandre_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 110 | 87.91 MiB | [Download](https://huggingface.co/datasets/CyberHarem/flandre_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 42 | 68.94 MiB | [Download](https://huggingface.co/datasets/CyberHarem/flandre_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 110 | 139.22 MiB | [Download](https://huggingface.co/datasets/CyberHarem/flandre_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). To use it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/flandre_azurlane',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
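The IMG+TXT packages above pair each image with a plain-text tag file. Assuming the tag file shares the image's file stem (an assumption about the archive layout, not verified here), a minimal sketch for iterating over an extracted package:

```python
from pathlib import Path

# Sketch under an assumed layout: one .txt tag file per image,
# sharing the image's stem (e.g. 1.png + 1.txt).
def iter_img_txt(dataset_dir):
    for img_path in sorted(Path(dataset_dir).glob('*.png')):
        txt_path = img_path.with_suffix('.txt')
        tags = txt_path.read_text().strip() if txt_path.exists() else ''
        yield img_path, tags
```

Each yielded pair is an image path plus its comma-separated tag string, ready for a text-to-image training pipeline.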
## List of Clusters
List of tag clustering results; some distinct outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, black_thighhighs, garter_straps, long_sleeves, looking_at_viewer, solo, white_leotard, blush, grey_hair, thighs, closed_mouth, hair_ornament, smile, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_thighhighs | garter_straps | long_sleeves | looking_at_viewer | solo | white_leotard | blush | grey_hair | thighs | closed_mouth | hair_ornament | smile | white_background |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------------------|:----------------|:---------------|:--------------------|:-------|:----------------|:--------|:------------|:---------|:---------------|:----------------|:--------|:-------------------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
voidful/spoken-alpaca-gpt4 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: speech_input
dtype: string
- name: input_speaker
dtype: string
- name: output_speaker
dtype: string
- name: input_audio
dtype: audio
- name: output_audio
dtype: audio
splits:
- name: train
num_bytes: 13538156036.948
num_examples: 51349
download_size: 13717890829
dataset_size: 13538156036.948
---
# Dataset Card for "spoken-alpaca-gpt4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Multimodal-Fatima/VisionClassification_test | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': abyssinian
'1': american bulldog
'2': american pit bull terrier
'3': basset hound
'4': beagle
'5': bengal
'6': birman
'7': bombay
'8': boxer
'9': british shorthair
'10': chihuahua
'11': egyptian mau
'12': english cocker spaniel
'13': english setter
'14': german shorthaired
'15': great pyrenees
'16': havanese
'17': japanese chin
'18': keeshond
'19': leonberger
'20': maine coon
'21': miniature pinscher
'22': newfoundland
'23': persian
'24': pomeranian
'25': pug
'26': ragdoll
'27': russian blue
'28': saint bernard
'29': samoyed
'30': scottish terrier
'31': shiba inu
'32': siamese
'33': sphynx
'34': staffordshire bull terrier
'35': wheaten terrier
'36': yorkshire terrier
- name: species
dtype:
class_label:
names:
'0': Cat
'1': Dog
- name: id
dtype: int64
- name: clip_tags_ViT_L_14
sequence: string
- name: blip_caption
dtype: string
- name: LLM_Description_gpt3_downstream_tasks_ViT_L_14
sequence: string
- name: clip_tag_ViT_L_14_specific
dtype: string
- name: clip_tags_ViT_L_14_ensemble_specific
dtype: string
- name: clip_tags_ViT_L_14_simple_specific
dtype: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14
sequence: string
- name: clip_tags_ViT_L_14_with_openai_classes
sequence: string
- name: clip_tags_ViT_L_14_wo_openai_classes
sequence: string
- name: Attributes_ViT_L_14_text_davinci_003
sequence: string
- name: Attributes_ViT_L_14_text_davinci_003_full
sequence: string
- name: Attributes_ViT_L_14_text_davinci_003_oxfordpets
sequence: string
- name: clip_tags_ViT_B_16_simple_specific
dtype: string
- name: clip_tags_ViT_B_16_ensemble_specific
dtype: string
- name: clip_tags_ViT_B_32_simple_specific
dtype: string
- name: clip_tags_ViT_B_32_ensemble_specific
dtype: string
- name: test_Attributes_ViT_L_14_descriptors_text_davinci_003_test
sequence: string
- name: test_Attributes_ViT_L_14_descriptors_text_davinci_003_full
sequence: string
splits:
- name: test
num_bytes: 420552388.0
num_examples: 3669
download_size: 413055355
dataset_size: 420552388.0
---
# Dataset Card for "VisionClassification_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/efi_fireemblem | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of efi (Fire Emblem)
This is the dataset of efi (Fire Emblem), containing 182 images and their tags.
The core tags of this character are `long_hair, braid, brown_eyes, twin_braids, blonde_hair, bow, breasts, brown_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 182 | 197.31 MiB | [Download](https://huggingface.co/datasets/CyberHarem/efi_fireemblem/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 182 | 121.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/efi_fireemblem/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 428 | 246.73 MiB | [Download](https://huggingface.co/datasets/CyberHarem/efi_fireemblem/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 182 | 177.77 MiB | [Download](https://huggingface.co/datasets/CyberHarem/efi_fireemblem/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 428 | 331.05 MiB | [Download](https://huggingface.co/datasets/CyberHarem/efi_fireemblem/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). To use it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/efi_fireemblem',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some distinct outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 9 |  |  |  |  |  | apron, dress, 1girl, open_mouth, simple_background, smile, solo, blush, short_sleeves, bracelet, capelet, hair_bow, white_background |
| 1 | 7 |  |  |  |  |  | 1girl, upper_body, dress, hat, open_mouth, smile, solo, hair_flower, holding_flower, simple_background, white_background |
| 2 | 13 |  |  |  |  |  | nipples, blush, 1boy, 1girl, hetero, penis, mosaic_censoring, open_mouth, solo_focus, large_breasts, sex, vaginal, cum_in_pussy, navel, completely_nude, medium_breasts, sweat |
| 3 | 5 |  |  |  |  |  | 1girl, blush, medium_breasts, nipples, pussy, solo, looking_at_viewer, anus, completely_nude, navel, on_back, open_mouth, smile, ass, bangs, closed_mouth, english_text, large_breasts, pillow, spread_legs, uncensored |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | apron | dress | 1girl | open_mouth | simple_background | smile | solo | blush | short_sleeves | bracelet | capelet | hair_bow | white_background | upper_body | hat | hair_flower | holding_flower | nipples | 1boy | hetero | penis | mosaic_censoring | solo_focus | large_breasts | sex | vaginal | cum_in_pussy | navel | completely_nude | medium_breasts | sweat | pussy | looking_at_viewer | anus | on_back | ass | bangs | closed_mouth | english_text | pillow | spread_legs | uncensored |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:--------|:-------------|:--------------------|:--------|:-------|:--------|:----------------|:-----------|:----------|:-----------|:-------------------|:-------------|:------|:--------------|:-----------------|:----------|:-------|:---------|:--------|:-------------------|:-------------|:----------------|:------|:----------|:---------------|:--------|:------------------|:-----------------|:--------|:--------|:--------------------|:-------|:----------|:------|:--------|:---------------|:---------------|:---------|:--------------|:-------------|
| 0 | 9 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | | X | X | X | X | X | X | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 13 |  |  |  |  |  | | | X | X | | | | X | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | | | X | X | | X | X | X | | | | | | | | | | X | | | | | | X | | | | X | X | X | | X | X | X | X | X | X | X | X | X | X | X |
|
enip2473/human_chat | ---
license: mit
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 76877
num_examples: 213
download_size: 0
dataset_size: 76877
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
amongglue/youtube_subtitles | ---
license: mit
---
|
Estwld/atomic2020-origin-drop_duplicates | ---
dataset_info:
features:
- name: knowledge_type
dtype: string
- name: event
dtype: string
- name: relation
dtype: string
- name: relation_description
dtype: string
- name: tail
dtype: string
splits:
- name: train
num_bytes: 144625369
num_examples: 1008254
- name: validation
num_bytes: 13168434
num_examples: 94614
- name: test
num_bytes: 21485601
num_examples: 143736
download_size: 21558003
dataset_size: 179279404
---
# Dataset Card for "atomic2020-origin-drop_duplicates"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pranjali97/ha-en_RL-grow2_valid | ---
dataset_info:
features:
- name: src
dtype: string
- name: ref
dtype: string
- name: mt
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 2573608
num_examples: 5565
download_size: 442593
dataset_size: 2573608
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ha-en_RL-grow2_valid"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
LuvGuest/StrongHold | ---
license: cc
---
|
famepram/llama-2-jk48-demo | ---
license: other
license_name: readme.md
license_link: LICENSE
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: table
dtype: string
---
# Dataset Card for "Llama-2-JKT48-FP"
This dataset is intended to give LLaMA 2 improved coding and instruction-following capabilities, with a specific focus on JKT48 knowledge.
The dataset was created as an exercise in training LLaMA 2. |
communityai/Open-Orca___1million-gpt-4 | ---
dataset_info:
features:
- name: source
dtype: string
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 1839230023.0
num_examples: 994896
download_size: 978017926
dataset_size: 1839230023.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SauravMaheshkar/threads-ask-ubuntu | ---
license: unknown
task_categories:
- graph-ml
tags:
- chemistry
configs:
- config_name: transductive
data_files:
- split: train
path: "processed/transductive/train_df.csv"
- split: valid
path: "processed/transductive/val_df.csv"
- split: test
path: "processed/transductive/test_df.csv"
- config_name: inductive
data_files:
- split: train
path: "processed/inductive/train_df.csv"
- split: valid
path: "processed/inductive/val_df.csv"
- split: test
path: "processed/inductive/test_df.csv"
- config_name: raw
data_files: "raw/*.txt"
---
Source Paper: https://arxiv.org/abs/1802.06916
### Usage
```python
from torch_geometric.datasets.cornell import CornellTemporalHyperGraphDataset

dataset = CornellTemporalHyperGraphDataset(root="./", name="threads-ask-ubuntu", split="train")
```
### Citation
```bibtex
@article{Benson-2018-simplicial,
author = {Benson, Austin R. and Abebe, Rediet and Schaub, Michael T. and Jadbabaie, Ali and Kleinberg, Jon},
title = {Simplicial closure and higher-order link prediction},
year = {2018},
doi = {10.1073/pnas.1800683115},
publisher = {National Academy of Sciences},
issn = {0027-8424},
journal = {Proceedings of the National Academy of Sciences}
}
``` |
nz/lichess_data | ---
dataset_info:
features:
- name: result
dtype: string
- name: white_elo
dtype: string
- name: black_elo
dtype: string
- name: termination
dtype: string
- name: moves
dtype: string
- name: source
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 241142426138
num_examples: 685036846
download_size: 132014177208
dataset_size: 241142426138
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
aswin1906/github-advisory-2021.csv | ---
license: apache-2.0
---
|
cheafdevo56/All_EASY_MEDIUM_HIC_Triplets | ---
dataset_info:
features:
- name: query
struct:
- name: abstract
dtype: string
- name: corpus_id
dtype: int64
- name: title
dtype: string
- name: pos
struct:
- name: abstract
dtype: string
- name: corpus_id
dtype: int64
- name: title
dtype: string
- name: neg
struct:
- name: abstract
dtype: string
- name: corpus_id
dtype: int64
- name: score
dtype: int64
- name: title
dtype: string
splits:
- name: train
num_bytes: 241023028.5
num_examples: 58500
- name: validation
num_bytes: 26780336.5
num_examples: 6500
download_size: 159432861
dataset_size: 267803365.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
liuyanchen1015/VALUE_qqp_null_genetive | ---
dataset_info:
features:
- name: question1
dtype: string
- name: question2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 344541
num_examples: 1928
- name: test
num_bytes: 3293958
num_examples: 18748
- name: train
num_bytes: 3073564
num_examples: 17249
download_size: 4181542
dataset_size: 6712063
---
# Dataset Card for "VALUE_qqp_null_genetive"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/kitakaze_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of kitakaze/北風/北风 (Azur Lane)
This is the dataset of kitakaze/北風/北风 (Azur Lane), containing 36 images and their tags.
The core tags of this character are `green_eyes, animal_ears, bangs, grey_hair, ribbon, hair_ribbon, breasts, ponytail, small_breasts, yellow_ribbon, short_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:-------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 36 | 46.68 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kitakaze_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 36 | 27.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kitakaze_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 86 | 57.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kitakaze_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 36 | 40.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kitakaze_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 86 | 80.09 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kitakaze_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). To use it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/kitakaze_azurlane',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some distinct outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 36 |  |  |  |  |  | 1girl, solo, looking_at_viewer, holding, pantyhose, sword, simple_background, blush, pleated_skirt, sheathed, white_skirt, smile, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | looking_at_viewer | holding | pantyhose | sword | simple_background | blush | pleated_skirt | sheathed | white_skirt | smile | white_background |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------------------|:----------|:------------|:--------|:--------------------|:--------|:----------------|:-----------|:--------------|:--------|:-------------------|
| 0 | 36 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
martinastoppel/test | ---
license: apache-2.0
---
|
MohamedExperio/rvl | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: int64
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 604018529.0
num_examples: 5000
- name: validation
num_bytes: 602268570.0
num_examples: 5000
- name: test
num_bytes: 603026890.0
num_examples: 5000
download_size: 0
dataset_size: 1809313989.0
---
# Dataset Card for "rvl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
aizenSosuke/merged_dataset | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 14009291
num_examples: 13510
download_size: 8180801
dataset_size: 14009291
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ikawrakow/imatrix-from-wiki-train | ---
license: apache-2.0
---
This repository contains importance matrix datasets for use with the improved quantization methods recently added to `llama.cpp`.
The importance matrix has been computed using `wiki.train.raw` as training data.
Hope the file names are self-explanatory.
To use it after cloning this repo (e.g. for Mixtral-8x7B with `Q4_K_M` quantization), run:
```
./quantize --imatrix path_to_repo/mixtral-8x7b.imatrix path_to_model ggml-model-q4k-m.gguf Q4_K_M
```
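For reference, importance matrices such as these can be generated with `llama.cpp`'s `imatrix` tool; a sketch (paths are placeholders, and exact flags may vary between `llama.cpp` versions):

```
./imatrix -m path_to_model -f wiki.train.raw -o model.imatrix
```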
|
one-sec-cv12/chunk_29 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 16335459648.5
num_examples: 170076
download_size: 14446652583
dataset_size: 16335459648.5
---
# Dataset Card for "chunk_29"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Francesco/apex-videogame | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': apex-game
'1': avatar
'2': object
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: apex-videogame
tags:
- rf100
---
# Dataset Card for apex-videogame
**The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/apex-videogame
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
apex-videogame
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 640,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
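The COCO `[x_min, y_min, width, height]` box convention can trip up tooling that expects corner coordinates; a small conversion helper (illustrative, not part of the dataset):

```python
def coco_to_corners(bbox):
    """Convert a COCO [x_min, y_min, width, height] box to
    [x_min, y_min, x_max, y_max] corner format."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]
```

For instance, the first box of the sample instance, `[302.0, 109.0, 73.0, 52.0]`, converts to `[302.0, 109.0, 375.0, 161.0]`.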
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/apex-videogame
### Citation Information
```
@misc{ apex-videogame,
title = { apex videogame Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/apex-videogame } },
url = { https://universe.roboflow.com/object-detection/apex-videogame },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
taeminlee/Ko-mrtydi | ---
language:
- ko
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- mrtydi
task_categories:
- text-retrieval
task_ids:
- document-retrieval
config_names:
- default
- queries
- corpus
tags:
- text-retrieval
dataset_info:
- config_name: default
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 34828
num_examples: 1317
- name: dev
num_bytes: 8121
num_examples: 307
- name: test
num_bytes: 13482
num_examples: 492
- config_name: corpus
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: corpus
num_bytes: 609925549
num_examples: 1496126
- config_name: queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: queries
num_bytes: 135944
num_examples: 2019
configs:
- config_name: default
data_files:
- split: train
path: qrels/train.jsonl
- split: dev
path: qrels/dev.jsonl
- split: test
path: qrels/test.jsonl
- config_name: corpus
data_files:
- split: corpus
path: corpus.jsonl
- config_name: queries
data_files:
- split: queries
path: queries.jsonl
---
# Ko-mrtydi
This dataset is a conversion of the Korean (ko) section of the [Mr. TyDi dataset](https://github.com/castorini/mr.tydi) into the [BeIR](https://github.com/beir-cellar/beir) format, making it compatible with [mteb](https://github.com/embeddings-benchmark/mteb). |
Prudhvi6e/Jimny | ---
license: apache-2.0
---
|
liuyanchen1015/MULTI_VALUE_sst2_their_they | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev
num_bytes: 3575
num_examples: 20
- name: test
num_bytes: 7423
num_examples: 44
- name: train
num_bytes: 103557
num_examples: 883
download_size: 53466
dataset_size: 114555
---
# Dataset Card for "MULTI_VALUE_sst2_their_they"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AmandaBai98/iptacopan_pubmed-dataset | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 76176
num_examples: 53
download_size: 49812
dataset_size: 76176
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
heliosprime/twitter_dataset_1713007190 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 7011
num_examples: 15
download_size: 8387
dataset_size: 7011
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1713007190"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Ba2han/Reddit-instruct-curated_rated-1.2k | ---
license: mit
language:
- en
size_categories:
- 1K<n<10K
---
This is an LLM-rated version of **euclaise/reddit-instruct-curated**, which is already a good dataset imo.
Only **post titles** and **comment texts** were rated, as post texts can be confusing due to edits and seemingly out-of-context information.
First, **I filtered out examples with a comment score below 250**. Of course this is not a very effective filter, as some pairs might reference other comments or simply be unhelpful yet upvoted due to the Reddit hivemind.
Next, I sent the example pairs with a rating prompt to Senku-Q2-XS and collected the numeric ratings **(out of 10)**.
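The two-stage filtering above can be sketched roughly like this (the field names are hypothetical illustrations, not the dataset's actual columns):

```python
# Rough sketch of the two-stage filtering described above.
# Field names ("comment_score", "rating") are hypothetical, not the
# dataset's actual column names.

def filter_pairs(pairs, min_score=250, min_rating=6):
    """Keep pairs with enough upvotes and a high enough LLM rating."""
    return [
        p for p in pairs
        if p["comment_score"] >= min_score and p["rating"] >= min_rating
    ]

pairs = [
    {"title": "ELI5: why is the sky blue?", "comment_score": 812, "rating": 9},
    {"title": "ELI5: how do magnets work?", "comment_score": 530, "rating": 4},  # upvoted but poorly rated
    {"title": "ELI5: what is entropy?", "comment_score": 120, "rating": 8},      # below the score cutoff
]
kept = filter_pairs(pairs)
```

Only the first pair survives both the upvote cutoff and the rating cutoff, which mirrors how a highly upvoted but poorly rated pair gets dropped.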
Overall there aren't many low rated examples. Here are three "worst" examples:

There are only 66 examples rated below 6.
An example of highly upvoted but poorly rated pair:

**Let me know if I fucked up anything, I still have no idea what I am doing honestly.** |
CyberHarem/ff_fn49_girlsfrontline | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of ff_fn49/FFFN49/FN-49 (Girls' Frontline)
This is the dataset of ff_fn49/FFFN49/FN-49 (Girls' Frontline), containing 20 images and their tags.
The core tags of this character are `green_eyes, long_hair, breasts, drill_hair, brown_hair, hat, blonde_hair, bangs, large_breasts, very_long_hair, medium_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 20 | 20.66 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ff_fn49_girlsfrontline/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 20 | 15.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ff_fn49_girlsfrontline/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 42 | 28.31 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ff_fn49_girlsfrontline/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 20 | 19.80 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ff_fn49_girlsfrontline/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 42 | 36.46 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ff_fn49_girlsfrontline/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/ff_fn49_girlsfrontline',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 20 |  |  |  |  |  | looking_at_viewer, 1girl, solo, blush, flower, white_gloves, bare_shoulders, open_mouth, pantyhose, drill_locks, simple_background, smile, white_background, cleavage |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | looking_at_viewer | 1girl | solo | blush | flower | white_gloves | bare_shoulders | open_mouth | pantyhose | drill_locks | simple_background | smile | white_background | cleavage |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------|:--------|:-------|:--------|:---------|:---------------|:-----------------|:-------------|:------------|:--------------|:--------------------|:--------|:-------------------|:-----------|
| 0 | 20 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
open-llm-leaderboard/details_KnutJaegersberg__gpt-2-xl-EvolInstruct | ---
pretty_name: Evaluation run of KnutJaegersberg/gpt-2-xl-EvolInstruct
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [KnutJaegersberg/gpt-2-xl-EvolInstruct](https://huggingface.co/KnutJaegersberg/gpt-2-xl-EvolInstruct)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_KnutJaegersberg__gpt-2-xl-EvolInstruct\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-09-17T18:02:57.671011](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__gpt-2-xl-EvolInstruct/blob/main/results_2023-09-17T18-02-57.671011.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0045092281879194635,\n\
\ \"em_stderr\": 0.000686134689909505,\n \"f1\": 0.039052013422818846,\n\
\ \"f1_stderr\": 0.0012293007940162644,\n \"acc\": 0.26831931822737687,\n\
\ \"acc_stderr\": 0.007544776234715419\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0045092281879194635,\n \"em_stderr\": 0.000686134689909505,\n\
\ \"f1\": 0.039052013422818846,\n \"f1_stderr\": 0.0012293007940162644\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.001516300227445034,\n \
\ \"acc_stderr\": 0.0010717793485492619\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.5351223362273086,\n \"acc_stderr\": 0.014017773120881576\n\
\ }\n}\n```"
repo_url: https://huggingface.co/KnutJaegersberg/gpt-2-xl-EvolInstruct
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_09_17T18_02_57.671011
path:
- '**/details_harness|drop|3_2023-09-17T18-02-57.671011.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-09-17T18-02-57.671011.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_09_17T18_02_57.671011
path:
- '**/details_harness|gsm8k|5_2023-09-17T18-02-57.671011.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-09-17T18-02-57.671011.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_09_17T18_02_57.671011
path:
- '**/details_harness|winogrande|5_2023-09-17T18-02-57.671011.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-09-17T18-02-57.671011.parquet'
- config_name: results
data_files:
- split: 2023_09_17T18_02_57.671011
path:
- results_2023-09-17T18-02-57.671011.parquet
- split: latest
path:
- results_2023-09-17T18-02-57.671011.parquet
---
# Dataset Card for Evaluation run of KnutJaegersberg/gpt-2-xl-EvolInstruct
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/KnutJaegersberg/gpt-2-xl-EvolInstruct
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [KnutJaegersberg/gpt-2-xl-EvolInstruct](https://huggingface.co/KnutJaegersberg/gpt-2-xl-EvolInstruct) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_KnutJaegersberg__gpt-2-xl-EvolInstruct",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-17T18:02:57.671011](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__gpt-2-xl-EvolInstruct/blob/main/results_2023-09-17T18-02-57.671011.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0045092281879194635,
"em_stderr": 0.000686134689909505,
"f1": 0.039052013422818846,
"f1_stderr": 0.0012293007940162644,
"acc": 0.26831931822737687,
"acc_stderr": 0.007544776234715419
},
"harness|drop|3": {
"em": 0.0045092281879194635,
"em_stderr": 0.000686134689909505,
"f1": 0.039052013422818846,
"f1_stderr": 0.0012293007940162644
},
"harness|gsm8k|5": {
"acc": 0.001516300227445034,
"acc_stderr": 0.0010717793485492619
},
"harness|winogrande|5": {
"acc": 0.5351223362273086,
"acc_stderr": 0.014017773120881576
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
misza222/OwczarekPodhalanski-dog-lr1e-06-max_train_steps1200-results | ---
dataset_info:
features:
- name: images
dtype: image
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 2753767.0
num_examples: 6
download_size: 2755049
dataset_size: 2753767.0
---
# Dataset Card for "OwczarekPodhalanski-dog-lr1e-06-max_train_steps1200-results"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
celsowm/imdb-reviews-pt-br | ---
dataset_info:
features:
- name: id
dtype: int64
- name: texto
dtype: string
- name: sentimento
dtype: int32
splits:
- name: train
num_bytes: 65805332
num_examples: 49459
download_size: 41015476
dataset_size: 65805332
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "imdb-reviews-pt-br"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Tejagoud/english_spain_20layouts | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 2840533918.5972295
num_examples: 10322
- name: test
num_bytes: 358183551.4668479
num_examples: 1291
- name: validation
num_bytes: 358702625.69392234
num_examples: 1290
download_size: 3000508409
dataset_size: 3557420095.758
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
yilunzhao/KnowledgeMath | ---
license: mit
---
## KnowledgeMath Benchmark Description
**KnowledgeMath** is a knowledge-intensive dataset focused on mathematical reasoning within the domain of finance. It requires the model to comprehend specialized financial terminology and to interpret tabular data presented in the questions.
**KnowledgeMath** includes **1200 QA examples** across 7 key areas in finance. These examples were collected from financial experts and feature detailed solution annotations in Python format.
- Paper: https://arxiv.org/abs/2311.09797
- Code: https://github.com/yale-nlp/KnowledgeMath
- Leaderboard: will be released soon!
## KnowledgeMath Dataset Information
All the data examples were divided into two subsets: *validation* and *test*.
- **validation**: 200 examples used for model development, validation, or for those with limited computing resources.
- **test**: 1000 examples for standard evaluation. We will not publicly release the annotated solution and answer for the test set.
You can download this dataset with the following command:
```python
from datasets import load_dataset
dataset = load_dataset("yale-nlp/KnowledgeMath")
# print the first example on the validation set
print(dataset["validation"][0])
# print the first example on the test set
print(dataset["test"][0])
```
The dataset is provided in json format and contains the following attributes:
```json
{
"question_id": [string] The question id,
"question": [string] The question text,
"tables": [list] List of Markdown-format tables associated with the question,
"python_solution": [string] Python-format and executable solution by financial experts. The code is written in a clear and executable format, with well-named variables and a detailed explanation,
"ground_truth": [integer] Executed result of `python solution`, rounded to three decimal places,
"topic": [string] The related financial area of the question,
"knowledge_terms": [list] List of knowledge terms in our constructed knowledge bank that is necessary to answer the given question. We will release this feature upon paper publication
}
```
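For intuition on how `ground_truth` relates to `python_solution`, the sketch below executes an expert-style solution string and rounds the result to three decimal places; the sample solution is invented for illustration and is not an actual dataset entry:

```python
# Hypothetical illustration: `ground_truth` is the executed result of
# `python_solution`, rounded to three decimal places. The solution text
# below is invented for demonstration; real entries come from the dataset.
python_solution = """
def solution():
    principal = 1000
    rate = 0.05
    years = 3
    # compound the principal annually
    return principal * (1 + rate) ** years
"""

namespace = {}
exec(python_solution, namespace)  # define solution() from the string
ground_truth = round(namespace["solution"](), 3)
print(ground_truth)
```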
## Automated Evaluation
To automatically evaluate a model on **KnowledgeMath**, please refer to our GitHub repository [here](https://github.com/yale-nlp/KnowledgeMath).
## Citation
If you use the **KnowledgeMath** dataset in your work, please kindly cite the paper:
```
@misc{zhao2023knowledgemath,
title={KnowledgeMath: Knowledge-Intensive Math Word Problem Solving in Finance Domains},
author={Yilun Zhao and Hongjun Liu and Yitao Long and Rui Zhang and Chen Zhao and Arman Cohan},
year={2023},
eprint={2311.09797},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
distilled-one-sec-cv12-each-chunk-uniq/chunk_102 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1318092616.0
num_examples: 256838
download_size: 1351808125
dataset_size: 1318092616.0
---
# Dataset Card for "chunk_102"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
anirudhlakhotia/baarat-batched-hindi-pre-training | ---
language:
- hi
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 23080432881
num_examples: 8780938
download_size: 9371674517
dataset_size: 23080432881
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Joel0007/vegeta | ---
license: openrail
---
|
minorproj/custom_data | ---
license: apache-2.0
---
|
recastai/flickr30k-augmented-caption | ---
language:
- en
license: cc-by-4.0
pretty_name: Flickr30k-augmented-captions
dataset_info:
features:
- name: prompt
dtype: string
- name: caption
dtype: string
- name: filename
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 156472618
num_examples: 154573
download_size: 74228652
dataset_size: 156472618
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
cadene/aloha_sim_transfer_cube_human | ---
license: mit
---
|
multimodalart/v-majesty-diffusion-settings | ---
license: mit
---
A collection of default settings for the text-to-image model [V-Majesty Diffusion](https://github.com/multimodalart/majesty-diffusion#v-majesty-diffusion-v12). If you love your settings, please add yours by going to the `Files and versions` tab and hitting upload.

Also please add a description of what your settings excel at (it's okay if they are general-purpose too)
 |
qdi0/autotrain-data-pro | ---
task_categories:
- summarization
---
# AutoTrain Dataset for project: pro
## Dataset Description
This dataset has been automatically processed by AutoTrain for project pro.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "Dietitian",
"target": "As a dietitian, I would like to design a vegetarian recipe for 2 people that has approximate 500 calories per serving and has a low glycemic index. Can you please provide a suggestion?"
},
{
"text": "IT Architect",
"target": "I want you to act as an IT Architect. I will provide some details about the functionality of an application or other digital product, and it will be your job to come up with ways to integrate it into the IT landscape. This could involve analyzing business requirements, performing a gap analysis and mapping the functionality of the new system to the existing IT landscape. Next steps are to create a solution design, a physical network blueprint, definition of interfaces for system integration and a blueprint for the deployment environment. My first request is \"I need help to integrate a CMS system.\""
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 122 |
| valid | 31 |
|
kasvii/face-partuv2beautifulluv-ffhq10-samples | ---
dataset_info:
features:
- name: original_image
dtype: image
- name: edit_prompt
dtype: string
- name: edited_image
dtype: image
- name: control_image
dtype: image
splits:
- name: train
num_bytes: 9517189.0
num_examples: 10
download_size: 0
dataset_size: 9517189.0
---
# Dataset Card for "face-partuv2beautifulluv-ffhq10-samples"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
automated-research-group/gpt2-winogrande_inverted_option | ---
dataset_info:
features:
- name: id
dtype: string
- name: response
dtype: string
- name: request
dtype: string
- name: input_perplexity
dtype: float64
- name: input_likelihood
dtype: float64
- name: output_perplexity
dtype: float64
- name: output_likelihood
dtype: float64
splits:
- name: validation
num_bytes: 357254
num_examples: 1267
download_size: 162698
dataset_size: 357254
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "gpt2-winogrande_inverted_option"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AdapterOcean/med_alpaca_standardized_cluster_18 | ---
dataset_info:
features:
- name: text
dtype: string
- name: conversation_id
dtype: int64
- name: embedding
sequence: float64
- name: cluster
dtype: int64
splits:
- name: train
num_bytes: 51491469
num_examples: 5916
download_size: 13176052
dataset_size: 51491469
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "med_alpaca_standardized_cluster_18"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ShibaKZH/Remwaifu | ---
license: apache-2.0
---
|
open-llm-leaderboard/details_prithivida__Asimov-7B-v2 | ---
pretty_name: Evaluation run of prithivida/Asimov-7B-v2
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [prithivida/Asimov-7B-v2](https://huggingface.co/prithivida/Asimov-7B-v2) on the\
\ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 1 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_prithivida__Asimov-7B-v2\"\
,\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese\
\ are the [latest results from run 2023-12-03T19:02:27.334666](https://huggingface.co/datasets/open-llm-leaderboard/details_prithivida__Asimov-7B-v2/blob/main/results_2023-12-03T19-02-27.334666.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.10917361637604246,\n\
\ \"acc_stderr\": 0.008590089300511155\n },\n \"harness|gsm8k|5\":\
\ {\n \"acc\": 0.10917361637604246,\n \"acc_stderr\": 0.008590089300511155\n\
\ }\n}\n```"
repo_url: https://huggingface.co/prithivida/Asimov-7B-v2
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_03T19_02_27.334666
path:
- '**/details_harness|gsm8k|5_2023-12-03T19-02-27.334666.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-03T19-02-27.334666.parquet'
- config_name: results
data_files:
- split: 2023_12_03T19_02_27.334666
path:
- results_2023-12-03T19-02-27.334666.parquet
- split: latest
path:
- results_2023-12-03T19-02-27.334666.parquet
---
# Dataset Card for Evaluation run of prithivida/Asimov-7B-v2
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/prithivida/Asimov-7B-v2
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [prithivida/Asimov-7B-v2](https://huggingface.co/prithivida/Asimov-7B-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_prithivida__Asimov-7B-v2",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-03T19:02:27.334666](https://huggingface.co/datasets/open-llm-leaderboard/details_prithivida__Asimov-7B-v2/blob/main/results_2023-12-03T19-02-27.334666.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.10917361637604246,
"acc_stderr": 0.008590089300511155
},
"harness|gsm8k|5": {
"acc": 0.10917361637604246,
"acc_stderr": 0.008590089300511155
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
ibranze/araproje_arc_en_w3 | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: answerKey
dtype: string
splits:
- name: validation
num_bytes: 80691.81818181818
num_examples: 250
download_size: 0
dataset_size: 80691.81818181818
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "araproje_arc_en_w3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
urikxx/nvgesture | ---
language:
- en
- ru
--- |
jwigginton/timeseries-1mn-sp500 | ---
dataset_info:
features:
- name: symbol
dtype: string
- name: datetime
dtype: timestamp[ns]
- name: open
dtype: float64
- name: high
dtype: float64
- name: low
dtype: float64
- name: close
dtype: float64
- name: volume
dtype: float64
splits:
- name: train
num_bytes: 21921693
num_examples: 392924
download_size: 11036974
dataset_size: 21921693
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AragonSpace/Vozes | ---
license: openrail
--- |
CWKSC/common_voice_13_0-ja-whisper-tiny | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 11557295928
num_examples: 12032
- name: test
num_bytes: 4765120552
num_examples: 4961
download_size: 0
dataset_size: 16322416480
---
# Dataset Card for "common_voice_13_0-ja-whisper-tiny"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gradio/new_saving_json | ---
configs:
- config_name: default
data_files:
- split: train
path: '**/*.jsonl'
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
anonymoussubmissions/earnings21-gold-transcripts-non-normalized | ---
dataset_info:
features:
- name: tokens
sequence: string
- name: labels
sequence: string
- name: tags
sequence:
class_label:
names:
'0': O
'1': B-CARDINAL
'2': B-DATE
'3': B-EVENT
'4': B-FAC
'5': B-GPE
'6': B-LANGUAGE
'7': B-LAW
'8': B-LOC
'9': B-MONEY
'10': B-NORP
'11': B-ORDINAL
'12': B-ORG
'13': B-PERCENT
'14': B-PERSON
'15': B-PRODUCT
'16': B-QUANTITY
'17': B-TIME
'18': B-WORK_OF_ART
'19': I-CARDINAL
'20': I-DATE
'21': I-EVENT
'22': I-FAC
'23': I-GPE
'24': I-LANGUAGE
'25': I-LAW
'26': I-LOC
'27': I-MONEY
'28': I-NORP
'29': I-ORDINAL
'30': I-ORG
'31': I-PERCENT
'32': I-PERSON
'33': I-PRODUCT
'34': I-QUANTITY
'35': I-TIME
'36': I-WORK_OF_ART
- name: labels_orig
sequence: string
- name: file_id
dtype: string
- name: sentence_no
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 4503755
num_examples: 6955
- name: validation
num_bytes: 3019922
num_examples: 4637
- name: test
num_bytes: 4948836
num_examples: 7729
download_size: 1677962
dataset_size: 12472513
---
# Dataset Card for "earnings21-gold-transcripts-non-normalized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
han2lin/squad_small | ---
language:
- en
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: train
num_bytes: 24489106
num_examples: 27045
- name: valid
num_bytes: 3337865
num_examples: 3688
- name: test
num_bytes: 10472984
num_examples: 10570
download_size: 19442674
dataset_size: 38299955
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
---
|
pgajo/subs-ready | ---
dataset_info:
features:
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 26606139128
num_examples: 27702
- name: test
num_bytes: 11403326072
num_examples: 11873
download_size: 4533911345
dataset_size: 38009465200
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
cheafdevo56/Influential_MixedNegTypes_10Percent | ---
dataset_info:
features:
- name: query
struct:
- name: abstract
dtype: string
- name: corpus_id
dtype: int64
- name: title
dtype: string
- name: pos
struct:
- name: abstract
dtype: string
- name: corpus_id
dtype: int64
- name: title
dtype: string
- name: neg
struct:
- name: abstract
dtype: string
- name: corpus_id
dtype: int64
- name: score
dtype: int64
- name: title
dtype: string
splits:
- name: train
num_bytes: 173896072.2
num_examples: 45000
- name: validation
num_bytes: 19321785.8
num_examples: 5000
download_size: 116170115
dataset_size: 193217858.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
khaxtran/swin-faqs | ---
task_categories:
- question-answering
--- |
sitloboi2012/CMDS_Multimodal_Document | ---
license: apache-2.0
task_categories:
- image-classification
- text-classification
- image-to-text
language:
- bg
tags:
- DocumentAI
- ImageClassification
- SequenceClassification
pretty_name: CMDS Document Images Dataset
size_categories:
- n<1K
---
# Dataset Card for Cyrillic Multimodal Document (CMDS)
This dataset consists of 3,789 pairs of images and text across 31 categories, downloaded from the Bulgarian Ministry of Finance.
### Dataset Summary
CMDS is a collection of document images with their accompanying text in Bulgarian, organized into 31 categories and collected from the Bulgarian Ministry of Finance.
### Supported Tasks and Leaderboards
Use this dataset for downstream tasks such as Document Classification, Image Classification, or Text Classification (Sequence Classification). It is suitable for multimodal models such as the LayoutLM family, Donut, etc.
### Languages
Bulgarian
### Data Fields
- __text__ (bytes): the text appearing in the document
- __filename__ (str): the name of the file
- __image__ (PIL.Image): the image of the document
- __label__ (str): the label of the document; there are 31 distinct labels
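Given the fields above, the shape of a record can be checked with a small stdlib sketch (the helper and the sample values are illustrative, not part of the dataset):

```python
# The four fields documented above for each CMDS record.
REQUIRED_FIELDS = {"text", "filename", "image", "label"}

def validate_record(record: dict) -> bool:
    """Check that a record carries the four documented fields."""
    return REQUIRED_FIELDS.issubset(record)

example = {
    "text": b"...",               # raw bytes of the document text
    "filename": "doc_0001.png",   # hypothetical file name
    "image": None,                # a PIL.Image in the real dataset
    "label": "category_0",        # one of the 31 labels (name illustrative)
}
assert validate_record(example)
```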
dumyy/text-classification-subject | ---
dataset_info:
features:
- name: content
dtype: string
- name: label
dtype:
class_label:
names:
'0': 文学
'1': 数学
'2': 英文
'3': 物理
'4': 生物
'5': 化学
splits:
- name: train
num_bytes: 319
num_examples: 8
- name: val
num_bytes: 319
num_examples: 8
- name: test
num_bytes: 319
num_examples: 8
download_size: 0
dataset_size: 957
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
---
# Dataset Card for "text-classification-subject"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Chendi/ibm_transactions | ---
license: apache-2.0
---
|
AISimplyExplained/Assignment_RBI_Notifications | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 22037853.115133435
num_examples: 78031
- name: test
num_bytes: 5509533.884866566
num_examples: 19508
download_size: 14761897
dataset_size: 27547387.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
gmongaras/book_BERT_512 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 228229039152
num_examples: 74004228
download_size: 2826157131
dataset_size: 228229039152
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
Dataset tokenized with the bert-cased tokenizer, cut off at 512 tokens.
Original dataset: https://huggingface.co/datasets/bookcorpus |
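The 512-token cutoff can be sketched as plain truncation plus padding; a minimal stdlib illustration (the helper name, pad id, and padding behavior are assumptions, not the actual preprocessing code):

```python
def truncate_example(input_ids, max_len=512, pad_id=0):
    """Cut a token-id sequence off at max_len and pad shorter ones,
    returning (input_ids, attention_mask) as in the dataset's fields."""
    ids = list(input_ids)[:max_len]
    mask = [1] * len(ids)           # 1 marks real tokens
    if len(ids) < max_len:
        pad = max_len - len(ids)
        ids += [pad_id] * pad       # 0 marks padding positions
        mask += [0] * pad
    return ids, mask

ids, mask = truncate_example(range(600))
assert len(ids) == 512 and sum(mask) == 512
```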
collabora/whisperspeech-librilight | ---
license: cc0-1.0
---
This is a processed LibriLight dataset ready for training the WhisperSpeech models.
See [https://github.com/collabora/WhisperSpeech](https://github.com/collabora/WhisperSpeech) for more details.
## Quick start
If you want to quickly train a basic WhisperSpeech model you can start by downloading the small subset:
```bash
# magic includes to download only the small and validation data splits and the accompanying config files
huggingface-cli download --repo-type dataset --include '*-small-*' '*small.dataset' '*-speakers*' --local-dir . -- collabora/whisperspeech-librilight
# download the semantic token model to extract the token embeddings from it
huggingface-cli download collabora/whisperspeech whisper-vq-stoks-medium-en+pl.model
# the T2S training invocation:
python3 -m whisperspeech.train_multi \
--task "t2s_up_wds_mlang_enclm base --frozen_embeddings_model whisper-vq-stoks-medium-en+pl.model" \
--batch-size 32 --accumulate-grad-batches 2 \
--epochs 2 --lr-schedule wsd \
--tunables="--cps_input --causal_encoder --warmup_steps=300 --encoder_depth_ratio=.25" \
--dataset-config=--vq_codes=513 \
--training-data @librilight-t2s-train-small.dataset \
--validation-data @librilight-t2s-val-common-speakers.dataset \
--validation-data @librilight-t2s-val-unseen-speakers.dataset \
--monitored-metric 'val_loss/dataloader_idx_0'
# the S2A training invocation:
python3 -m whisperspeech.train_multi \
--task "s2a_delar_mup_wds_mlang tiny --quantizers 4 --spk_width=192 --frozen_embeddings_model whisper-vq-stoks-medium-en+pl.model" \
--batch-size 48 \
--epochs 4 --lr-schedule wsd \
--tunables="--rope --warmup_steps=300" \
--dataset-config=--vq_codes=513 \
--training-data @librilight-s2a-train-small.dataset \
--validation-data @librilight-s2a-val-common-speakers.dataset \
--validation-data @librilight-s2a-val-unseen-speakers.dataset \
--monitored-metric 'val_loss/dataloader_idx_0'
```
The `--accumulate-grad-batches` option is set to get a good effective batch size on a single 4090 GPU.
If you have multiple GPUs it will probably make sense to lower the batch size. For example 16 GPUs
with a batch size of 16 seem to give good performance and fast training.
Because we use Maximum Update Parametrization, higher effective batch sizes always result in lower
losses and you don't need to adjust the learning rate. Unfortunately the effect is not linear, so
there is an optimal batch size and little benefit to increasing it further.
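The effective batch size discussed above is just the product of the per-GPU batch, the gradient-accumulation steps, and the GPU count; a small stdlib sketch (the helper name is ours, not part of WhisperSpeech):

```python
def effective_batch_size(per_gpu_batch, accumulate_grad_batches=1, num_gpus=1):
    """Samples contributing to each optimizer step: per-GPU batch times
    gradient-accumulation steps times data-parallel replicas."""
    return per_gpu_batch * accumulate_grad_batches * num_gpus

# The single-4090 T2S invocation above: --batch-size 32 with 2 accumulation steps
print(effective_batch_size(32, accumulate_grad_batches=2))  # 64
# A 16-GPU run with per-GPU batch 16 and no accumulation
print(effective_batch_size(16, num_gpus=16))  # 256
```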
|
Baidicoot/augmented_advbench_v3_filtered | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: completion_1
dtype: string
- name: completion_2
dtype: string
- name: completion_3
dtype: string
- name: completion_4
dtype: string
- name: completion_5
dtype: string
- name: refusal
dtype: string
- name: generic_refusal
dtype: string
splits:
- name: train
num_bytes: 13417203
num_examples: 5230
download_size: 6843731
dataset_size: 13417203
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/details_fionazhang__mistral-experiment-6 | ---
pretty_name: Evaluation run of fionazhang/mistral-experiment-6
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [fionazhang/mistral-experiment-6](https://huggingface.co/fionazhang/mistral-experiment-6)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_fionazhang__mistral-experiment-6\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-29T00:42:51.247635](https://huggingface.co/datasets/open-llm-leaderboard/details_fionazhang__mistral-experiment-6/blob/main/results_2024-01-29T00-42-51.247635.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5538123362399926,\n\
\ \"acc_stderr\": 0.033998687888343836,\n \"acc_norm\": 0.5600824142135805,\n\
\ \"acc_norm_stderr\": 0.034730108194204,\n \"mc1\": 0.29865361077111385,\n\
\ \"mc1_stderr\": 0.016021570613768542,\n \"mc2\": 0.4568589633964796,\n\
\ \"mc2_stderr\": 0.01480166536535197\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5273037542662116,\n \"acc_stderr\": 0.014589589101985996,\n\
\ \"acc_norm\": 0.5580204778156996,\n \"acc_norm_stderr\": 0.014512682523128345\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6227843059151563,\n\
\ \"acc_stderr\": 0.0048369903732615694,\n \"acc_norm\": 0.814479187412866,\n\
\ \"acc_norm_stderr\": 0.003879250555254521\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \
\ \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.42962962962962964,\n\
\ \"acc_stderr\": 0.04276349494376599,\n \"acc_norm\": 0.42962962962962964,\n\
\ \"acc_norm_stderr\": 0.04276349494376599\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.5921052631578947,\n \"acc_stderr\": 0.039993097127774734,\n\
\ \"acc_norm\": 0.5921052631578947,\n \"acc_norm_stderr\": 0.039993097127774734\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.52,\n\
\ \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \
\ \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6150943396226415,\n \"acc_stderr\": 0.02994649856769995,\n\
\ \"acc_norm\": 0.6150943396226415,\n \"acc_norm_stderr\": 0.02994649856769995\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5902777777777778,\n\
\ \"acc_stderr\": 0.04112490974670787,\n \"acc_norm\": 0.5902777777777778,\n\
\ \"acc_norm_stderr\": 0.04112490974670787\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.44,\n \"acc_stderr\": 0.04988876515698589,\n \
\ \"acc_norm\": 0.44,\n \"acc_norm_stderr\": 0.04988876515698589\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.45,\n \"acc_stderr\": 0.049999999999999996,\n \"acc_norm\": 0.45,\n\
\ \"acc_norm_stderr\": 0.049999999999999996\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5433526011560693,\n\
\ \"acc_stderr\": 0.03798106566014498,\n \"acc_norm\": 0.5433526011560693,\n\
\ \"acc_norm_stderr\": 0.03798106566014498\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.3333333333333333,\n \"acc_stderr\": 0.04690650298201942,\n\
\ \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.04690650298201942\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n\
\ \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5191489361702127,\n \"acc_stderr\": 0.032662042990646796,\n\
\ \"acc_norm\": 0.5191489361702127,\n \"acc_norm_stderr\": 0.032662042990646796\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.39473684210526316,\n\
\ \"acc_stderr\": 0.045981880578165414,\n \"acc_norm\": 0.39473684210526316,\n\
\ \"acc_norm_stderr\": 0.045981880578165414\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.4827586206896552,\n \"acc_stderr\": 0.04164188720169377,\n\
\ \"acc_norm\": 0.4827586206896552,\n \"acc_norm_stderr\": 0.04164188720169377\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4021164021164021,\n \"acc_stderr\": 0.025253032554997692,\n \"\
acc_norm\": 0.4021164021164021,\n \"acc_norm_stderr\": 0.025253032554997692\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3253968253968254,\n\
\ \"acc_stderr\": 0.041905964388711366,\n \"acc_norm\": 0.3253968253968254,\n\
\ \"acc_norm_stderr\": 0.041905964388711366\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \
\ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6935483870967742,\n\
\ \"acc_stderr\": 0.026226485652553883,\n \"acc_norm\": 0.6935483870967742,\n\
\ \"acc_norm_stderr\": 0.026226485652553883\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.3842364532019704,\n \"acc_stderr\": 0.0342239856565755,\n\
\ \"acc_norm\": 0.3842364532019704,\n \"acc_norm_stderr\": 0.0342239856565755\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n\
\ \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.6121212121212121,\n \"acc_stderr\": 0.03804913653971012,\n\
\ \"acc_norm\": 0.6121212121212121,\n \"acc_norm_stderr\": 0.03804913653971012\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7323232323232324,\n \"acc_stderr\": 0.03154449888270285,\n \"\
acc_norm\": 0.7323232323232324,\n \"acc_norm_stderr\": 0.03154449888270285\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.7875647668393783,\n \"acc_stderr\": 0.029519282616817216,\n\
\ \"acc_norm\": 0.7875647668393783,\n \"acc_norm_stderr\": 0.029519282616817216\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.5230769230769231,\n \"acc_stderr\": 0.025323990861736232,\n\
\ \"acc_norm\": 0.5230769230769231,\n \"acc_norm_stderr\": 0.025323990861736232\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.35555555555555557,\n \"acc_stderr\": 0.0291857149498574,\n \
\ \"acc_norm\": 0.35555555555555557,\n \"acc_norm_stderr\": 0.0291857149498574\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.5336134453781513,\n \"acc_stderr\": 0.03240501447690071,\n \
\ \"acc_norm\": 0.5336134453781513,\n \"acc_norm_stderr\": 0.03240501447690071\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.36423841059602646,\n \"acc_stderr\": 0.03929111781242742,\n \"\
acc_norm\": 0.36423841059602646,\n \"acc_norm_stderr\": 0.03929111781242742\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.7504587155963303,\n \"acc_stderr\": 0.018553897629501628,\n \"\
acc_norm\": 0.7504587155963303,\n \"acc_norm_stderr\": 0.018553897629501628\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.46296296296296297,\n \"acc_stderr\": 0.03400603625538271,\n \"\
acc_norm\": 0.46296296296296297,\n \"acc_norm_stderr\": 0.03400603625538271\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.6764705882352942,\n \"acc_stderr\": 0.032834720561085606,\n \"\
acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.032834720561085606\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.679324894514768,\n \"acc_stderr\": 0.030381931949990407,\n \
\ \"acc_norm\": 0.679324894514768,\n \"acc_norm_stderr\": 0.030381931949990407\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.5560538116591929,\n\
\ \"acc_stderr\": 0.03334625674242728,\n \"acc_norm\": 0.5560538116591929,\n\
\ \"acc_norm_stderr\": 0.03334625674242728\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7175572519083969,\n \"acc_stderr\": 0.03948406125768361,\n\
\ \"acc_norm\": 0.7175572519083969,\n \"acc_norm_stderr\": 0.03948406125768361\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7272727272727273,\n \"acc_stderr\": 0.04065578140908705,\n \"\
acc_norm\": 0.7272727272727273,\n \"acc_norm_stderr\": 0.04065578140908705\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.5833333333333334,\n\
\ \"acc_stderr\": 0.04766075165356461,\n \"acc_norm\": 0.5833333333333334,\n\
\ \"acc_norm_stderr\": 0.04766075165356461\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.6380368098159509,\n \"acc_stderr\": 0.037757007291414416,\n\
\ \"acc_norm\": 0.6380368098159509,\n \"acc_norm_stderr\": 0.037757007291414416\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.3482142857142857,\n\
\ \"acc_stderr\": 0.045218299028335865,\n \"acc_norm\": 0.3482142857142857,\n\
\ \"acc_norm_stderr\": 0.045218299028335865\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7184466019417476,\n \"acc_stderr\": 0.044532548363264673,\n\
\ \"acc_norm\": 0.7184466019417476,\n \"acc_norm_stderr\": 0.044532548363264673\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.7905982905982906,\n\
\ \"acc_stderr\": 0.026655699653922737,\n \"acc_norm\": 0.7905982905982906,\n\
\ \"acc_norm_stderr\": 0.026655699653922737\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.65,\n \"acc_stderr\": 0.0479372485441102,\n \
\ \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n\
\ \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7343550446998723,\n\
\ \"acc_stderr\": 0.01579430248788872,\n \"acc_norm\": 0.7343550446998723,\n\
\ \"acc_norm_stderr\": 0.01579430248788872\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6213872832369942,\n \"acc_stderr\": 0.02611374936131034,\n\
\ \"acc_norm\": 0.6213872832369942,\n \"acc_norm_stderr\": 0.02611374936131034\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.31843575418994413,\n\
\ \"acc_stderr\": 0.015581008080360276,\n \"acc_norm\": 0.31843575418994413,\n\
\ \"acc_norm_stderr\": 0.015581008080360276\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6339869281045751,\n \"acc_stderr\": 0.027582811415159617,\n\
\ \"acc_norm\": 0.6339869281045751,\n \"acc_norm_stderr\": 0.027582811415159617\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6559485530546624,\n\
\ \"acc_stderr\": 0.026981478043648043,\n \"acc_norm\": 0.6559485530546624,\n\
\ \"acc_norm_stderr\": 0.026981478043648043\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6265432098765432,\n \"acc_stderr\": 0.026915003011380154,\n\
\ \"acc_norm\": 0.6265432098765432,\n \"acc_norm_stderr\": 0.026915003011380154\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4326241134751773,\n \"acc_stderr\": 0.029555454236778855,\n \
\ \"acc_norm\": 0.4326241134751773,\n \"acc_norm_stderr\": 0.029555454236778855\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4041720990873533,\n\
\ \"acc_stderr\": 0.01253350404649136,\n \"acc_norm\": 0.4041720990873533,\n\
\ \"acc_norm_stderr\": 0.01253350404649136\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.5992647058823529,\n \"acc_stderr\": 0.02976826352893311,\n\
\ \"acc_norm\": 0.5992647058823529,\n \"acc_norm_stderr\": 0.02976826352893311\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.5473856209150327,\n \"acc_stderr\": 0.02013679091849253,\n \
\ \"acc_norm\": 0.5473856209150327,\n \"acc_norm_stderr\": 0.02013679091849253\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6363636363636364,\n\
\ \"acc_stderr\": 0.04607582090719976,\n \"acc_norm\": 0.6363636363636364,\n\
\ \"acc_norm_stderr\": 0.04607582090719976\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.5755102040816327,\n \"acc_stderr\": 0.031642094879429414,\n\
\ \"acc_norm\": 0.5755102040816327,\n \"acc_norm_stderr\": 0.031642094879429414\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7711442786069652,\n\
\ \"acc_stderr\": 0.029705284056772436,\n \"acc_norm\": 0.7711442786069652,\n\
\ \"acc_norm_stderr\": 0.029705284056772436\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768079,\n \
\ \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768079\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5120481927710844,\n\
\ \"acc_stderr\": 0.038913644958358175,\n \"acc_norm\": 0.5120481927710844,\n\
\ \"acc_norm_stderr\": 0.038913644958358175\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.7602339181286549,\n \"acc_stderr\": 0.032744852119469564,\n\
\ \"acc_norm\": 0.7602339181286549,\n \"acc_norm_stderr\": 0.032744852119469564\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.29865361077111385,\n\
\ \"mc1_stderr\": 0.016021570613768542,\n \"mc2\": 0.4568589633964796,\n\
\ \"mc2_stderr\": 0.01480166536535197\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7379636937647988,\n \"acc_stderr\": 0.012358944431637563\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.2221379833206975,\n \
\ \"acc_stderr\": 0.011449986902435321\n }\n}\n```"
repo_url: https://huggingface.co/fionazhang/mistral-experiment-6
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|arc:challenge|25_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|gsm8k|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hellaswag|10_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-29T00-42-51.247635.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-29T00-42-51.247635.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- '**/details_harness|winogrande|5_2024-01-29T00-42-51.247635.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-29T00-42-51.247635.parquet'
- config_name: results
data_files:
- split: 2024_01_29T00_42_51.247635
path:
- results_2024-01-29T00-42-51.247635.parquet
- split: latest
path:
- results_2024-01-29T00-42-51.247635.parquet
---
# Dataset Card for Evaluation run of fionazhang/mistral-experiment-6
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [fionazhang/mistral-experiment-6](https://huggingface.co/fionazhang/mistral-experiment-6) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_fionazhang__mistral-experiment-6",
"harness_winogrande_5",
split="train")
```
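Each timestamped split name is simply the run timestamp with `-` and `:` replaced by `_` (the fractional seconds are kept). A small helper, written here as an illustration of that naming convention rather than an official API, converts a timestamp into the split name used in each configuration:

```python
def timestamp_to_split(ts: str) -> str:
    """Convert a run timestamp (as it appears in the results filename)
    into the split name used inside each configuration."""
    # The split naming replaces '-' and ':' with '_' and keeps the rest.
    return ts.replace("-", "_").replace(":", "_")

split_name = timestamp_to_split("2024-01-29T00:42:51.247635")
# split_name == "2024_01_29T00_42_51.247635"
```

You can pass either this timestamped split or `"latest"` to the `split` argument of `load_dataset` shown above.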
## Latest results
These are the [latest results from run 2024-01-29T00:42:51.247635](https://huggingface.co/datasets/open-llm-leaderboard/details_fionazhang__mistral-experiment-6/blob/main/results_2024-01-29T00-42-51.247635.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.5538123362399926,
"acc_stderr": 0.033998687888343836,
"acc_norm": 0.5600824142135805,
"acc_norm_stderr": 0.034730108194204,
"mc1": 0.29865361077111385,
"mc1_stderr": 0.016021570613768542,
"mc2": 0.4568589633964796,
"mc2_stderr": 0.01480166536535197
},
"harness|arc:challenge|25": {
"acc": 0.5273037542662116,
"acc_stderr": 0.014589589101985996,
"acc_norm": 0.5580204778156996,
"acc_norm_stderr": 0.014512682523128345
},
"harness|hellaswag|10": {
"acc": 0.6227843059151563,
"acc_stderr": 0.0048369903732615694,
"acc_norm": 0.814479187412866,
"acc_norm_stderr": 0.003879250555254521
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.42962962962962964,
"acc_stderr": 0.04276349494376599,
"acc_norm": 0.42962962962962964,
"acc_norm_stderr": 0.04276349494376599
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5921052631578947,
"acc_stderr": 0.039993097127774734,
"acc_norm": 0.5921052631578947,
"acc_norm_stderr": 0.039993097127774734
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6150943396226415,
"acc_stderr": 0.02994649856769995,
"acc_norm": 0.6150943396226415,
"acc_norm_stderr": 0.02994649856769995
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.5902777777777778,
"acc_stderr": 0.04112490974670787,
"acc_norm": 0.5902777777777778,
"acc_norm_stderr": 0.04112490974670787
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.44,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.44,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.45,
"acc_stderr": 0.049999999999999996,
"acc_norm": 0.45,
"acc_norm_stderr": 0.049999999999999996
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5433526011560693,
"acc_stderr": 0.03798106566014498,
"acc_norm": 0.5433526011560693,
"acc_norm_stderr": 0.03798106566014498
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3333333333333333,
"acc_stderr": 0.04690650298201942,
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.04690650298201942
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5191489361702127,
"acc_stderr": 0.032662042990646796,
"acc_norm": 0.5191489361702127,
"acc_norm_stderr": 0.032662042990646796
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.39473684210526316,
"acc_stderr": 0.045981880578165414,
"acc_norm": 0.39473684210526316,
"acc_norm_stderr": 0.045981880578165414
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.4827586206896552,
"acc_stderr": 0.04164188720169377,
"acc_norm": 0.4827586206896552,
"acc_norm_stderr": 0.04164188720169377
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4021164021164021,
"acc_stderr": 0.025253032554997692,
"acc_norm": 0.4021164021164021,
"acc_norm_stderr": 0.025253032554997692
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3253968253968254,
"acc_stderr": 0.041905964388711366,
"acc_norm": 0.3253968253968254,
"acc_norm_stderr": 0.041905964388711366
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6935483870967742,
"acc_stderr": 0.026226485652553883,
"acc_norm": 0.6935483870967742,
"acc_norm_stderr": 0.026226485652553883
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.3842364532019704,
"acc_stderr": 0.0342239856565755,
"acc_norm": 0.3842364532019704,
"acc_norm_stderr": 0.0342239856565755
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.6121212121212121,
"acc_stderr": 0.03804913653971012,
"acc_norm": 0.6121212121212121,
"acc_norm_stderr": 0.03804913653971012
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7323232323232324,
"acc_stderr": 0.03154449888270285,
"acc_norm": 0.7323232323232324,
"acc_norm_stderr": 0.03154449888270285
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.7875647668393783,
"acc_stderr": 0.029519282616817216,
"acc_norm": 0.7875647668393783,
"acc_norm_stderr": 0.029519282616817216
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5230769230769231,
"acc_stderr": 0.025323990861736232,
"acc_norm": 0.5230769230769231,
"acc_norm_stderr": 0.025323990861736232
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.35555555555555557,
"acc_stderr": 0.0291857149498574,
"acc_norm": 0.35555555555555557,
"acc_norm_stderr": 0.0291857149498574
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.5336134453781513,
"acc_stderr": 0.03240501447690071,
"acc_norm": 0.5336134453781513,
"acc_norm_stderr": 0.03240501447690071
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.36423841059602646,
"acc_stderr": 0.03929111781242742,
"acc_norm": 0.36423841059602646,
"acc_norm_stderr": 0.03929111781242742
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7504587155963303,
"acc_stderr": 0.018553897629501628,
"acc_norm": 0.7504587155963303,
"acc_norm_stderr": 0.018553897629501628
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.46296296296296297,
"acc_stderr": 0.03400603625538271,
"acc_norm": 0.46296296296296297,
"acc_norm_stderr": 0.03400603625538271
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.032834720561085606,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.032834720561085606
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.679324894514768,
"acc_stderr": 0.030381931949990407,
"acc_norm": 0.679324894514768,
"acc_norm_stderr": 0.030381931949990407
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.5560538116591929,
"acc_stderr": 0.03334625674242728,
"acc_norm": 0.5560538116591929,
"acc_norm_stderr": 0.03334625674242728
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7175572519083969,
"acc_stderr": 0.03948406125768361,
"acc_norm": 0.7175572519083969,
"acc_norm_stderr": 0.03948406125768361
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7272727272727273,
"acc_stderr": 0.04065578140908705,
"acc_norm": 0.7272727272727273,
"acc_norm_stderr": 0.04065578140908705
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.5833333333333334,
"acc_stderr": 0.04766075165356461,
"acc_norm": 0.5833333333333334,
"acc_norm_stderr": 0.04766075165356461
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6380368098159509,
"acc_stderr": 0.037757007291414416,
"acc_norm": 0.6380368098159509,
"acc_norm_stderr": 0.037757007291414416
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.3482142857142857,
"acc_stderr": 0.045218299028335865,
"acc_norm": 0.3482142857142857,
"acc_norm_stderr": 0.045218299028335865
},
"harness|hendrycksTest-management|5": {
"acc": 0.7184466019417476,
"acc_stderr": 0.044532548363264673,
"acc_norm": 0.7184466019417476,
"acc_norm_stderr": 0.044532548363264673
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.7905982905982906,
"acc_stderr": 0.026655699653922737,
"acc_norm": 0.7905982905982906,
"acc_norm_stderr": 0.026655699653922737
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.65,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.65,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7343550446998723,
"acc_stderr": 0.01579430248788872,
"acc_norm": 0.7343550446998723,
"acc_norm_stderr": 0.01579430248788872
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6213872832369942,
"acc_stderr": 0.02611374936131034,
"acc_norm": 0.6213872832369942,
"acc_norm_stderr": 0.02611374936131034
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.31843575418994413,
"acc_stderr": 0.015581008080360276,
"acc_norm": 0.31843575418994413,
"acc_norm_stderr": 0.015581008080360276
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6339869281045751,
"acc_stderr": 0.027582811415159617,
"acc_norm": 0.6339869281045751,
"acc_norm_stderr": 0.027582811415159617
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6559485530546624,
"acc_stderr": 0.026981478043648043,
"acc_norm": 0.6559485530546624,
"acc_norm_stderr": 0.026981478043648043
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6265432098765432,
"acc_stderr": 0.026915003011380154,
"acc_norm": 0.6265432098765432,
"acc_norm_stderr": 0.026915003011380154
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4326241134751773,
"acc_stderr": 0.029555454236778855,
"acc_norm": 0.4326241134751773,
"acc_norm_stderr": 0.029555454236778855
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4041720990873533,
"acc_stderr": 0.01253350404649136,
"acc_norm": 0.4041720990873533,
"acc_norm_stderr": 0.01253350404649136
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5992647058823529,
"acc_stderr": 0.02976826352893311,
"acc_norm": 0.5992647058823529,
"acc_norm_stderr": 0.02976826352893311
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5473856209150327,
"acc_stderr": 0.02013679091849253,
"acc_norm": 0.5473856209150327,
"acc_norm_stderr": 0.02013679091849253
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6363636363636364,
"acc_stderr": 0.04607582090719976,
"acc_norm": 0.6363636363636364,
"acc_norm_stderr": 0.04607582090719976
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.5755102040816327,
"acc_stderr": 0.031642094879429414,
"acc_norm": 0.5755102040816327,
"acc_norm_stderr": 0.031642094879429414
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7711442786069652,
"acc_stderr": 0.029705284056772436,
"acc_norm": 0.7711442786069652,
"acc_norm_stderr": 0.029705284056772436
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768079,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768079
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5120481927710844,
"acc_stderr": 0.038913644958358175,
"acc_norm": 0.5120481927710844,
"acc_norm_stderr": 0.038913644958358175
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7602339181286549,
"acc_stderr": 0.032744852119469564,
"acc_norm": 0.7602339181286549,
"acc_norm_stderr": 0.032744852119469564
},
"harness|truthfulqa:mc|0": {
"mc1": 0.29865361077111385,
"mc1_stderr": 0.016021570613768542,
"mc2": 0.4568589633964796,
"mc2_stderr": 0.01480166536535197
},
"harness|winogrande|5": {
"acc": 0.7379636937647988,
"acc_stderr": 0.012358944431637563
},
"harness|gsm8k|5": {
"acc": 0.2221379833206975,
"acc_stderr": 0.011449986902435321
}
}
```
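The per-task scores in the block above are combined into the top-level `"all"` figures; a minimal sketch of that aggregation, assuming a simple unweighted mean over tasks (the leaderboard's exact weighting may differ):

```python
# Hypothetical subset of the per-task results shown in the JSON above.
results = {
    "harness|hendrycksTest-management|5": {"acc": 0.7184466019417476},
    "harness|hendrycksTest-marketing|5": {"acc": 0.7905982905982906},
    "harness|hendrycksTest-medical_genetics|5": {"acc": 0.65},
}

# Unweighted macro-average accuracy over the selected tasks.
accs = [task["acc"] for task in results.values()]
macro_acc = sum(accs) / len(accs)
print(f"macro-average acc over {len(accs)} tasks: {macro_acc:.4f}")
```

The same pattern extends to `acc_norm` or any other metric key present in each task entry.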
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Atipico1/webq_test_adversary | ---
dataset_info:
features:
- name: question
dtype: string
- name: answers
sequence: string
- name: ctxs
list:
- name: hasanswer
dtype: bool
- name: score
dtype: float64
- name: text
dtype: string
- name: title
dtype: string
- name: gpt_answer_sentence
dtype: string
- name: gpt_adv_sentence
dtype: string
- name: is_valid_sentence
dtype: bool
- name: gpt_adv_passage
dtype: string
- name: is_valid_passage
dtype: bool
splits:
- name: train
num_bytes: 14746293
num_examples: 2032
download_size: 8471461
dataset_size: 14746293
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yuiseki/onomatopoeia-ja | ---
license: mit
dataset_info:
features:
- name: onomatopoeia_ja
dtype: string
- name: translate_en
sequence: string
- name: details_ja
sequence: string
- name: details_en
sequence: string
splits:
- name: train
num_bytes: 780954
num_examples: 5060
download_size: 334807
dataset_size: 780954
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
anarenteriare/dounut-test-dataset-4 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 144403962.0
num_examples: 301
download_size: 133427170
dataset_size: 144403962.0
---
# Dataset Card for "dounut-test-dataset-4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ibranze/araproje_arc_tr_w3 | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: answerKey
dtype: string
splits:
- name: validation
num_bytes: 87300.0
num_examples: 250
download_size: 46973
dataset_size: 87300.0
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "araproje_arc_tr_w3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
enoahjr/twitter_dataset_1713208152 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 649955
num_examples: 1886
download_size: 358863
dataset_size: 649955
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rai-sandeep/whitepapppr_5 | ---
dataset_info:
features:
- name: Sequence
dtype: int64
- name: Type
dtype: string
- name: Title
dtype: string
- name: Data
dtype: string
splits:
- name: train
num_bytes: 81095
num_examples: 5
download_size: 46351
dataset_size: 81095
---
# Dataset Card for "whitepapppr_5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
datahrvoje/twitter_dataset_1713036827 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 26421
num_examples: 59
download_size: 13564
dataset_size: 26421
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
phanvancongthanh/all_data | ---
dataset_info:
features:
- name: smiles
dtype: string
splits:
- name: train
num_bytes: 22169550234
num_examples: 507079513
download_size: 11449897663
dataset_size: 22169550234
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "all_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_vicgalle__OpenHermes-Gemma-2B | ---
pretty_name: Evaluation run of vicgalle/OpenHermes-Gemma-2B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [vicgalle/OpenHermes-Gemma-2B](https://huggingface.co/vicgalle/OpenHermes-Gemma-2B)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_vicgalle__OpenHermes-Gemma-2B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-01T01:29:12.773487](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__OpenHermes-Gemma-2B/blob/main/results_2024-03-01T01-29-12.773487.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.3815454988895058,\n\
\ \"acc_stderr\": 0.03415164278877812,\n \"acc_norm\": 0.3844943418441821,\n\
\ \"acc_norm_stderr\": 0.0349216871018852,\n \"mc1\": 0.26193390452876375,\n\
\ \"mc1_stderr\": 0.015392118805015027,\n \"mc2\": 0.416856831067088,\n\
\ \"mc2_stderr\": 0.014988851670951587\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.47952218430034127,\n \"acc_stderr\": 0.014599131353035009,\n\
\ \"acc_norm\": 0.4931740614334471,\n \"acc_norm_stderr\": 0.014610029151379813\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5528779127663812,\n\
\ \"acc_stderr\": 0.004961799358836435,\n \"acc_norm\": 0.7225652260505875,\n\
\ \"acc_norm_stderr\": 0.0044681782736656645\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.43703703703703706,\n\
\ \"acc_stderr\": 0.04284958639753399,\n \"acc_norm\": 0.43703703703703706,\n\
\ \"acc_norm_stderr\": 0.04284958639753399\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.3815789473684211,\n \"acc_stderr\": 0.03953173377749194,\n\
\ \"acc_norm\": 0.3815789473684211,\n \"acc_norm_stderr\": 0.03953173377749194\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.37,\n\
\ \"acc_stderr\": 0.048523658709391,\n \"acc_norm\": 0.37,\n \
\ \"acc_norm_stderr\": 0.048523658709391\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.39622641509433965,\n \"acc_stderr\": 0.03010279378179119,\n\
\ \"acc_norm\": 0.39622641509433965,\n \"acc_norm_stderr\": 0.03010279378179119\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.4097222222222222,\n\
\ \"acc_stderr\": 0.04112490974670787,\n \"acc_norm\": 0.4097222222222222,\n\
\ \"acc_norm_stderr\": 0.04112490974670787\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.39,\n\
\ \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.3236994219653179,\n\
\ \"acc_stderr\": 0.035676037996391685,\n \"acc_norm\": 0.3236994219653179,\n\
\ \"acc_norm_stderr\": 0.035676037996391685\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.21568627450980393,\n \"acc_stderr\": 0.04092563958237655,\n\
\ \"acc_norm\": 0.21568627450980393,\n \"acc_norm_stderr\": 0.04092563958237655\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.47,\n\
\ \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.3446808510638298,\n \"acc_stderr\": 0.03106898596312215,\n\
\ \"acc_norm\": 0.3446808510638298,\n \"acc_norm_stderr\": 0.03106898596312215\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2807017543859649,\n\
\ \"acc_stderr\": 0.042270544512322004,\n \"acc_norm\": 0.2807017543859649,\n\
\ \"acc_norm_stderr\": 0.042270544512322004\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.38620689655172413,\n \"acc_stderr\": 0.04057324734419035,\n\
\ \"acc_norm\": 0.38620689655172413,\n \"acc_norm_stderr\": 0.04057324734419035\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.26455026455026454,\n \"acc_stderr\": 0.022717467897708628,\n \"\
acc_norm\": 0.26455026455026454,\n \"acc_norm_stderr\": 0.022717467897708628\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3492063492063492,\n\
\ \"acc_stderr\": 0.04263906892795131,\n \"acc_norm\": 0.3492063492063492,\n\
\ \"acc_norm_stderr\": 0.04263906892795131\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \
\ \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.36451612903225805,\n \"acc_stderr\": 0.02737987122994325,\n \"\
acc_norm\": 0.36451612903225805,\n \"acc_norm_stderr\": 0.02737987122994325\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.27586206896551724,\n \"acc_stderr\": 0.03144712581678241,\n \"\
acc_norm\": 0.27586206896551724,\n \"acc_norm_stderr\": 0.03144712581678241\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145632,\n \"acc_norm\"\
: 0.38,\n \"acc_norm_stderr\": 0.04878317312145632\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.46060606060606063,\n \"acc_stderr\": 0.03892207016552012,\n\
\ \"acc_norm\": 0.46060606060606063,\n \"acc_norm_stderr\": 0.03892207016552012\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.3686868686868687,\n \"acc_stderr\": 0.034373055019806184,\n \"\
acc_norm\": 0.3686868686868687,\n \"acc_norm_stderr\": 0.034373055019806184\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.41450777202072536,\n \"acc_stderr\": 0.03555300319557673,\n\
\ \"acc_norm\": 0.41450777202072536,\n \"acc_norm_stderr\": 0.03555300319557673\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.34102564102564104,\n \"acc_stderr\": 0.02403548967633506,\n\
\ \"acc_norm\": 0.34102564102564104,\n \"acc_norm_stderr\": 0.02403548967633506\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.2777777777777778,\n \"acc_stderr\": 0.027309140588230175,\n \
\ \"acc_norm\": 0.2777777777777778,\n \"acc_norm_stderr\": 0.027309140588230175\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.29831932773109243,\n \"acc_stderr\": 0.02971914287634286,\n\
\ \"acc_norm\": 0.29831932773109243,\n \"acc_norm_stderr\": 0.02971914287634286\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.23178807947019867,\n \"acc_stderr\": 0.03445406271987054,\n \"\
acc_norm\": 0.23178807947019867,\n \"acc_norm_stderr\": 0.03445406271987054\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.48440366972477067,\n \"acc_stderr\": 0.02142689153920805,\n \"\
acc_norm\": 0.48440366972477067,\n \"acc_norm_stderr\": 0.02142689153920805\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.25462962962962965,\n \"acc_stderr\": 0.029711275860005357,\n \"\
acc_norm\": 0.25462962962962965,\n \"acc_norm_stderr\": 0.029711275860005357\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.4019607843137255,\n \"acc_stderr\": 0.03441190023482465,\n \"\
acc_norm\": 0.4019607843137255,\n \"acc_norm_stderr\": 0.03441190023482465\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.4472573839662447,\n \"acc_stderr\": 0.03236564251614192,\n \
\ \"acc_norm\": 0.4472573839662447,\n \"acc_norm_stderr\": 0.03236564251614192\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.3991031390134529,\n\
\ \"acc_stderr\": 0.03286745312567961,\n \"acc_norm\": 0.3991031390134529,\n\
\ \"acc_norm_stderr\": 0.03286745312567961\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.45038167938931295,\n \"acc_stderr\": 0.04363643698524779,\n\
\ \"acc_norm\": 0.45038167938931295,\n \"acc_norm_stderr\": 0.04363643698524779\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.5206611570247934,\n \"acc_stderr\": 0.04560456086387235,\n \"\
acc_norm\": 0.5206611570247934,\n \"acc_norm_stderr\": 0.04560456086387235\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.3888888888888889,\n\
\ \"acc_stderr\": 0.047128212574267705,\n \"acc_norm\": 0.3888888888888889,\n\
\ \"acc_norm_stderr\": 0.047128212574267705\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.31901840490797545,\n \"acc_stderr\": 0.03661997551073836,\n\
\ \"acc_norm\": 0.31901840490797545,\n \"acc_norm_stderr\": 0.03661997551073836\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.41964285714285715,\n\
\ \"acc_stderr\": 0.04684099321077106,\n \"acc_norm\": 0.41964285714285715,\n\
\ \"acc_norm_stderr\": 0.04684099321077106\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.44660194174757284,\n \"acc_stderr\": 0.04922424153458933,\n\
\ \"acc_norm\": 0.44660194174757284,\n \"acc_norm_stderr\": 0.04922424153458933\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.5897435897435898,\n\
\ \"acc_stderr\": 0.03222414045241108,\n \"acc_norm\": 0.5897435897435898,\n\
\ \"acc_norm_stderr\": 0.03222414045241108\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.4,\n \"acc_stderr\": 0.04923659639173309,\n \
\ \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n\
\ \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.4891443167305236,\n\
\ \"acc_stderr\": 0.01787574884024242,\n \"acc_norm\": 0.4891443167305236,\n\
\ \"acc_norm_stderr\": 0.01787574884024242\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.3901734104046243,\n \"acc_stderr\": 0.026261677607806642,\n\
\ \"acc_norm\": 0.3901734104046243,\n \"acc_norm_stderr\": 0.026261677607806642\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2424581005586592,\n\
\ \"acc_stderr\": 0.014333522059217889,\n \"acc_norm\": 0.2424581005586592,\n\
\ \"acc_norm_stderr\": 0.014333522059217889\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.4542483660130719,\n \"acc_stderr\": 0.028509807802626564,\n\
\ \"acc_norm\": 0.4542483660130719,\n \"acc_norm_stderr\": 0.028509807802626564\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.4115755627009646,\n\
\ \"acc_stderr\": 0.02795048149440127,\n \"acc_norm\": 0.4115755627009646,\n\
\ \"acc_norm_stderr\": 0.02795048149440127\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.39814814814814814,\n \"acc_stderr\": 0.02723741509459248,\n\
\ \"acc_norm\": 0.39814814814814814,\n \"acc_norm_stderr\": 0.02723741509459248\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.32978723404255317,\n \"acc_stderr\": 0.028045946942042398,\n \
\ \"acc_norm\": 0.32978723404255317,\n \"acc_norm_stderr\": 0.028045946942042398\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.31681877444589307,\n\
\ \"acc_stderr\": 0.011882349954723013,\n \"acc_norm\": 0.31681877444589307,\n\
\ \"acc_norm_stderr\": 0.011882349954723013\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.20588235294117646,\n \"acc_stderr\": 0.024562204314142314,\n\
\ \"acc_norm\": 0.20588235294117646,\n \"acc_norm_stderr\": 0.024562204314142314\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.3872549019607843,\n \"acc_stderr\": 0.019706875804085637,\n \
\ \"acc_norm\": 0.3872549019607843,\n \"acc_norm_stderr\": 0.019706875804085637\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.4727272727272727,\n\
\ \"acc_stderr\": 0.04782001791380063,\n \"acc_norm\": 0.4727272727272727,\n\
\ \"acc_norm_stderr\": 0.04782001791380063\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.3469387755102041,\n \"acc_stderr\": 0.0304725260267265,\n\
\ \"acc_norm\": 0.3469387755102041,\n \"acc_norm_stderr\": 0.0304725260267265\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.47761194029850745,\n\
\ \"acc_stderr\": 0.035319879302087305,\n \"acc_norm\": 0.47761194029850745,\n\
\ \"acc_norm_stderr\": 0.035319879302087305\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.45,\n \"acc_stderr\": 0.049999999999999996,\n \
\ \"acc_norm\": 0.45,\n \"acc_norm_stderr\": 0.049999999999999996\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.41566265060240964,\n\
\ \"acc_stderr\": 0.03836722176598052,\n \"acc_norm\": 0.41566265060240964,\n\
\ \"acc_norm_stderr\": 0.03836722176598052\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.5263157894736842,\n \"acc_stderr\": 0.03829509868994727,\n\
\ \"acc_norm\": 0.5263157894736842,\n \"acc_norm_stderr\": 0.03829509868994727\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.26193390452876375,\n\
\ \"mc1_stderr\": 0.015392118805015027,\n \"mc2\": 0.416856831067088,\n\
\ \"mc2_stderr\": 0.014988851670951587\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6511444356748224,\n \"acc_stderr\": 0.013395059320137332\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.12130401819560273,\n \
\ \"acc_stderr\": 0.008992888497275591\n }\n}\n```"
repo_url: https://huggingface.co/vicgalle/OpenHermes-Gemma-2B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|arc:challenge|25_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|gsm8k|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hellaswag|10_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-01T01-29-12.773487.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-01T01-29-12.773487.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- '**/details_harness|winogrande|5_2024-03-01T01-29-12.773487.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-01T01-29-12.773487.parquet'
- config_name: results
data_files:
- split: 2024_03_01T01_29_12.773487
path:
- results_2024-03-01T01-29-12.773487.parquet
- split: latest
path:
- results_2024-03-01T01-29-12.773487.parquet
---
# Dataset Card for Evaluation run of vicgalle/OpenHermes-Gemma-2B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [vicgalle/OpenHermes-Gemma-2B](https://huggingface.co/vicgalle/OpenHermes-Gemma-2B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_vicgalle__OpenHermes-Gemma-2B",
"harness_winogrande_5",
split="train")
```
## Latest results

These are the [latest results from run 2024-03-01T01:29:12.773487](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__OpenHermes-Gemma-2B/blob/main/results_2024-03-01T01-29-12.773487.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the "results" configuration and in the "latest" split of each eval):
```json
{
"all": {
"acc": 0.3815454988895058,
"acc_stderr": 0.03415164278877812,
"acc_norm": 0.3844943418441821,
"acc_norm_stderr": 0.0349216871018852,
"mc1": 0.26193390452876375,
"mc1_stderr": 0.015392118805015027,
"mc2": 0.416856831067088,
"mc2_stderr": 0.014988851670951587
},
"harness|arc:challenge|25": {
"acc": 0.47952218430034127,
"acc_stderr": 0.014599131353035009,
"acc_norm": 0.4931740614334471,
"acc_norm_stderr": 0.014610029151379813
},
"harness|hellaswag|10": {
"acc": 0.5528779127663812,
"acc_stderr": 0.004961799358836435,
"acc_norm": 0.7225652260505875,
"acc_norm_stderr": 0.0044681782736656645
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.43703703703703706,
"acc_stderr": 0.04284958639753399,
"acc_norm": 0.43703703703703706,
"acc_norm_stderr": 0.04284958639753399
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.3815789473684211,
"acc_stderr": 0.03953173377749194,
"acc_norm": 0.3815789473684211,
"acc_norm_stderr": 0.03953173377749194
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.37,
"acc_stderr": 0.048523658709391,
"acc_norm": 0.37,
"acc_norm_stderr": 0.048523658709391
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.39622641509433965,
"acc_stderr": 0.03010279378179119,
"acc_norm": 0.39622641509433965,
"acc_norm_stderr": 0.03010279378179119
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.4097222222222222,
"acc_stderr": 0.04112490974670787,
"acc_norm": 0.4097222222222222,
"acc_norm_stderr": 0.04112490974670787
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.3236994219653179,
"acc_stderr": 0.035676037996391685,
"acc_norm": 0.3236994219653179,
"acc_norm_stderr": 0.035676037996391685
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.21568627450980393,
"acc_stderr": 0.04092563958237655,
"acc_norm": 0.21568627450980393,
"acc_norm_stderr": 0.04092563958237655
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.3446808510638298,
"acc_stderr": 0.03106898596312215,
"acc_norm": 0.3446808510638298,
"acc_norm_stderr": 0.03106898596312215
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2807017543859649,
"acc_stderr": 0.042270544512322004,
"acc_norm": 0.2807017543859649,
"acc_norm_stderr": 0.042270544512322004
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.38620689655172413,
"acc_stderr": 0.04057324734419035,
"acc_norm": 0.38620689655172413,
"acc_norm_stderr": 0.04057324734419035
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.26455026455026454,
"acc_stderr": 0.022717467897708628,
"acc_norm": 0.26455026455026454,
"acc_norm_stderr": 0.022717467897708628
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3492063492063492,
"acc_stderr": 0.04263906892795131,
"acc_norm": 0.3492063492063492,
"acc_norm_stderr": 0.04263906892795131
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.36451612903225805,
"acc_stderr": 0.02737987122994325,
"acc_norm": 0.36451612903225805,
"acc_norm_stderr": 0.02737987122994325
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.27586206896551724,
"acc_stderr": 0.03144712581678241,
"acc_norm": 0.27586206896551724,
"acc_norm_stderr": 0.03144712581678241
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.38,
"acc_stderr": 0.04878317312145632,
"acc_norm": 0.38,
"acc_norm_stderr": 0.04878317312145632
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.46060606060606063,
"acc_stderr": 0.03892207016552012,
"acc_norm": 0.46060606060606063,
"acc_norm_stderr": 0.03892207016552012
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.3686868686868687,
"acc_stderr": 0.034373055019806184,
"acc_norm": 0.3686868686868687,
"acc_norm_stderr": 0.034373055019806184
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.41450777202072536,
"acc_stderr": 0.03555300319557673,
"acc_norm": 0.41450777202072536,
"acc_norm_stderr": 0.03555300319557673
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.34102564102564104,
"acc_stderr": 0.02403548967633506,
"acc_norm": 0.34102564102564104,
"acc_norm_stderr": 0.02403548967633506
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2777777777777778,
"acc_stderr": 0.027309140588230175,
"acc_norm": 0.2777777777777778,
"acc_norm_stderr": 0.027309140588230175
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.29831932773109243,
"acc_stderr": 0.02971914287634286,
"acc_norm": 0.29831932773109243,
"acc_norm_stderr": 0.02971914287634286
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.23178807947019867,
"acc_stderr": 0.03445406271987054,
"acc_norm": 0.23178807947019867,
"acc_norm_stderr": 0.03445406271987054
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.48440366972477067,
"acc_stderr": 0.02142689153920805,
"acc_norm": 0.48440366972477067,
"acc_norm_stderr": 0.02142689153920805
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.25462962962962965,
"acc_stderr": 0.029711275860005357,
"acc_norm": 0.25462962962962965,
"acc_norm_stderr": 0.029711275860005357
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.4019607843137255,
"acc_stderr": 0.03441190023482465,
"acc_norm": 0.4019607843137255,
"acc_norm_stderr": 0.03441190023482465
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.4472573839662447,
"acc_stderr": 0.03236564251614192,
"acc_norm": 0.4472573839662447,
"acc_norm_stderr": 0.03236564251614192
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.3991031390134529,
"acc_stderr": 0.03286745312567961,
"acc_norm": 0.3991031390134529,
"acc_norm_stderr": 0.03286745312567961
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.45038167938931295,
"acc_stderr": 0.04363643698524779,
"acc_norm": 0.45038167938931295,
"acc_norm_stderr": 0.04363643698524779
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.5206611570247934,
"acc_stderr": 0.04560456086387235,
"acc_norm": 0.5206611570247934,
"acc_norm_stderr": 0.04560456086387235
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.3888888888888889,
"acc_stderr": 0.047128212574267705,
"acc_norm": 0.3888888888888889,
"acc_norm_stderr": 0.047128212574267705
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.31901840490797545,
"acc_stderr": 0.03661997551073836,
"acc_norm": 0.31901840490797545,
"acc_norm_stderr": 0.03661997551073836
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.41964285714285715,
"acc_stderr": 0.04684099321077106,
"acc_norm": 0.41964285714285715,
"acc_norm_stderr": 0.04684099321077106
},
"harness|hendrycksTest-management|5": {
"acc": 0.44660194174757284,
"acc_stderr": 0.04922424153458933,
"acc_norm": 0.44660194174757284,
"acc_norm_stderr": 0.04922424153458933
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.5897435897435898,
"acc_stderr": 0.03222414045241108,
"acc_norm": 0.5897435897435898,
"acc_norm_stderr": 0.03222414045241108
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.4,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.4,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.4891443167305236,
"acc_stderr": 0.01787574884024242,
"acc_norm": 0.4891443167305236,
"acc_norm_stderr": 0.01787574884024242
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.3901734104046243,
"acc_stderr": 0.026261677607806642,
"acc_norm": 0.3901734104046243,
"acc_norm_stderr": 0.026261677607806642
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2424581005586592,
"acc_stderr": 0.014333522059217889,
"acc_norm": 0.2424581005586592,
"acc_norm_stderr": 0.014333522059217889
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.4542483660130719,
"acc_stderr": 0.028509807802626564,
"acc_norm": 0.4542483660130719,
"acc_norm_stderr": 0.028509807802626564
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.4115755627009646,
"acc_stderr": 0.02795048149440127,
"acc_norm": 0.4115755627009646,
"acc_norm_stderr": 0.02795048149440127
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.39814814814814814,
"acc_stderr": 0.02723741509459248,
"acc_norm": 0.39814814814814814,
"acc_norm_stderr": 0.02723741509459248
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.32978723404255317,
"acc_stderr": 0.028045946942042398,
"acc_norm": 0.32978723404255317,
"acc_norm_stderr": 0.028045946942042398
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.31681877444589307,
"acc_stderr": 0.011882349954723013,
"acc_norm": 0.31681877444589307,
"acc_norm_stderr": 0.011882349954723013
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.20588235294117646,
"acc_stderr": 0.024562204314142314,
"acc_norm": 0.20588235294117646,
"acc_norm_stderr": 0.024562204314142314
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.3872549019607843,
"acc_stderr": 0.019706875804085637,
"acc_norm": 0.3872549019607843,
"acc_norm_stderr": 0.019706875804085637
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.4727272727272727,
"acc_stderr": 0.04782001791380063,
"acc_norm": 0.4727272727272727,
"acc_norm_stderr": 0.04782001791380063
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.3469387755102041,
"acc_stderr": 0.0304725260267265,
"acc_norm": 0.3469387755102041,
"acc_norm_stderr": 0.0304725260267265
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.47761194029850745,
"acc_stderr": 0.035319879302087305,
"acc_norm": 0.47761194029850745,
"acc_norm_stderr": 0.035319879302087305
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.45,
"acc_stderr": 0.049999999999999996,
"acc_norm": 0.45,
"acc_norm_stderr": 0.049999999999999996
},
"harness|hendrycksTest-virology|5": {
"acc": 0.41566265060240964,
"acc_stderr": 0.03836722176598052,
"acc_norm": 0.41566265060240964,
"acc_norm_stderr": 0.03836722176598052
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.5263157894736842,
"acc_stderr": 0.03829509868994727,
"acc_norm": 0.5263157894736842,
"acc_norm_stderr": 0.03829509868994727
},
"harness|truthfulqa:mc|0": {
"mc1": 0.26193390452876375,
"mc1_stderr": 0.015392118805015027,
"mc2": 0.416856831067088,
"mc2_stderr": 0.014988851670951587
},
"harness|winogrande|5": {
"acc": 0.6511444356748224,
"acc_stderr": 0.013395059320137332
},
"harness|gsm8k|5": {
"acc": 0.12130401819560273,
"acc_stderr": 0.008992888497275591
}
}
```
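The per-task scores above can be combined into a single macro-average, similar to how the leaderboard aggregates MMLU-style suites. A minimal sketch, using a hypothetical excerpt of the results dict with values copied from the card:

```python
from statistics import mean

# Hypothetical excerpt of the results JSON above; the keys and nesting
# follow the harness output, and the acc values are taken from the card.
results = {
    "harness|hendrycksTest-world_religions|5": {"acc": 0.5263157894736842},
    "harness|winogrande|5": {"acc": 0.6511444356748224},
    "harness|gsm8k|5": {"acc": 0.12130401819560273},
}

# Macro-average: unweighted mean of per-task accuracies.
avg_acc = mean(v["acc"] for v in results.values())
print(round(avg_acc, 4))  # → 0.4329
```

In the full dataset, the same structure holds for every `harness|…` key, so the loop extends unchanged to all evaluated tasks.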
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Seongill/squad_adversarial_thres2 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_sent
dtype: string
- name: new_answer_sent
dtype: string
- name: new_answer_chunk
dtype: string
- name: similar_answer
dtype: string
- name: answer_chunk
dtype: string
- name: query_embedding
sequence: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 179441124
num_examples: 22978
download_size: 128700074
dataset_size: 179441124
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
corto-ai/handwritten-text | ---
dataset_info:
features:
- name: text
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 167800178.75
num_examples: 6482
- name: valid
num_bytes: 24887435.0
num_examples: 976
- name: test
num_bytes: 73857843.625
num_examples: 2915
download_size: 265569932
dataset_size: 266545457.375
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
---
|
reaganjlee/truthful_qa_mc_zh | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: label
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: train
num_bytes: 95395.0
num_examples: 342
- name: validation
num_bytes: 95395.0
num_examples: 342
download_size: 104268
dataset_size: 190790.0
---
# Dataset Card for "truthful_qa_mc_zh"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gargolito/blogwriter | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
ChengAoShen/emoji_dataset | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 477727160.512
num_examples: 80672
download_size: 400526151
dataset_size: 477727160.512
license: mit
---
# Emoji_dataset
This dataset includes various emojis to enable training diffusion models and other generative models. |
lo1206/Stable-Diffusion | ---
license: openrail
---
|
Manan28/final-test | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: context
dtype: string
- name: contexts
dtype: string
splits:
- name: test
num_bytes: 66327
num_examples: 20
download_size: 62133
dataset_size: 66327
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
Hemg/Deepfake-Audio-Dataset | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: label
dtype:
class_label:
names:
'0': Fake
'1': Real
splits:
- name: train
num_bytes: 88205613.0
num_examples: 100
download_size: 85240791
dataset_size: 88205613.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Deepfake-Audio-Dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_TomGrc__FN-OpenLLM_2x72B_MoE | ---
pretty_name: Evaluation run of TomGrc/FN-OpenLLM_2x72B_MoE
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [TomGrc/FN-OpenLLM_2x72B_MoE](https://huggingface.co/TomGrc/FN-OpenLLM_2x72B_MoE)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TomGrc__FN-OpenLLM_2x72B_MoE\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-06T01:52:10.589662](https://huggingface.co/datasets/open-llm-leaderboard/details_TomGrc__FN-OpenLLM_2x72B_MoE/blob/main/results_2024-02-06T01-52-10.589662.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.22887713151218728,\n\
\ \"acc_stderr\": 0.02978691747050183,\n \"acc_norm\": 0.22891869657995814,\n\
\ \"acc_norm_stderr\": 0.03057236693631619,\n \"mc1\": 0.23255813953488372,\n\
\ \"mc1_stderr\": 0.014789157531080514,\n \"mc2\": 0.48471292342924077,\n\
\ \"mc2_stderr\": 0.016304873353404845\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.2098976109215017,\n \"acc_stderr\": 0.011900548748047444,\n\
\ \"acc_norm\": 0.2551194539249147,\n \"acc_norm_stderr\": 0.012739038695202104\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.25562636924915355,\n\
\ \"acc_stderr\": 0.004353212146198434,\n \"acc_norm\": 0.2523401712806214,\n\
\ \"acc_norm_stderr\": 0.004334676952703859\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.22,\n \"acc_stderr\": 0.04163331998932268,\n \
\ \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.04163331998932268\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.25925925925925924,\n\
\ \"acc_stderr\": 0.03785714465066656,\n \"acc_norm\": 0.25925925925925924,\n\
\ \"acc_norm_stderr\": 0.03785714465066656\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.17763157894736842,\n \"acc_stderr\": 0.031103182383123398,\n\
\ \"acc_norm\": 0.17763157894736842,\n \"acc_norm_stderr\": 0.031103182383123398\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.3,\n\
\ \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \
\ \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.2339622641509434,\n \"acc_stderr\": 0.02605529690115292,\n\
\ \"acc_norm\": 0.2339622641509434,\n \"acc_norm_stderr\": 0.02605529690115292\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2569444444444444,\n\
\ \"acc_stderr\": 0.03653946969442099,\n \"acc_norm\": 0.2569444444444444,\n\
\ \"acc_norm_stderr\": 0.03653946969442099\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.17,\n \"acc_stderr\": 0.0377525168068637,\n \
\ \"acc_norm\": 0.17,\n \"acc_norm_stderr\": 0.0377525168068637\n },\n\
\ \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.26,\n\
\ \"acc_stderr\": 0.0440844002276808,\n \"acc_norm\": 0.26,\n \
\ \"acc_norm_stderr\": 0.0440844002276808\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.21,\n \"acc_stderr\": 0.040936018074033256,\n \
\ \"acc_norm\": 0.21,\n \"acc_norm_stderr\": 0.040936018074033256\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.2023121387283237,\n\
\ \"acc_stderr\": 0.030631145539198813,\n \"acc_norm\": 0.2023121387283237,\n\
\ \"acc_norm_stderr\": 0.030631145539198813\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.21568627450980393,\n \"acc_stderr\": 0.04092563958237654,\n\
\ \"acc_norm\": 0.21568627450980393,\n \"acc_norm_stderr\": 0.04092563958237654\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.27,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.27,\n\
\ \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.26382978723404255,\n \"acc_stderr\": 0.028809989854102973,\n\
\ \"acc_norm\": 0.26382978723404255,\n \"acc_norm_stderr\": 0.028809989854102973\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.23684210526315788,\n\
\ \"acc_stderr\": 0.039994238792813365,\n \"acc_norm\": 0.23684210526315788,\n\
\ \"acc_norm_stderr\": 0.039994238792813365\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.2482758620689655,\n \"acc_stderr\": 0.036001056927277716,\n\
\ \"acc_norm\": 0.2482758620689655,\n \"acc_norm_stderr\": 0.036001056927277716\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.20899470899470898,\n \"acc_stderr\": 0.02094048156533486,\n \"\
acc_norm\": 0.20899470899470898,\n \"acc_norm_stderr\": 0.02094048156533486\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.16666666666666666,\n\
\ \"acc_stderr\": 0.03333333333333337,\n \"acc_norm\": 0.16666666666666666,\n\
\ \"acc_norm_stderr\": 0.03333333333333337\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.18,\n \"acc_stderr\": 0.038612291966536934,\n \
\ \"acc_norm\": 0.18,\n \"acc_norm_stderr\": 0.038612291966536934\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.1774193548387097,\n \"acc_stderr\": 0.02173254068932927,\n \"\
acc_norm\": 0.1774193548387097,\n \"acc_norm_stderr\": 0.02173254068932927\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.15270935960591134,\n \"acc_stderr\": 0.02530890453938063,\n \"\
acc_norm\": 0.15270935960591134,\n \"acc_norm_stderr\": 0.02530890453938063\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\"\
: 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.21818181818181817,\n \"acc_stderr\": 0.03225078108306289,\n\
\ \"acc_norm\": 0.21818181818181817,\n \"acc_norm_stderr\": 0.03225078108306289\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.17676767676767677,\n \"acc_stderr\": 0.027178752639044915,\n \"\
acc_norm\": 0.17676767676767677,\n \"acc_norm_stderr\": 0.027178752639044915\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.19689119170984457,\n \"acc_stderr\": 0.028697873971860664,\n\
\ \"acc_norm\": 0.19689119170984457,\n \"acc_norm_stderr\": 0.028697873971860664\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.2358974358974359,\n \"acc_stderr\": 0.021525965407408726,\n\
\ \"acc_norm\": 0.2358974358974359,\n \"acc_norm_stderr\": 0.021525965407408726\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.2111111111111111,\n \"acc_stderr\": 0.024882116857655075,\n \
\ \"acc_norm\": 0.2111111111111111,\n \"acc_norm_stderr\": 0.024882116857655075\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.21008403361344538,\n \"acc_stderr\": 0.026461398717471874,\n\
\ \"acc_norm\": 0.21008403361344538,\n \"acc_norm_stderr\": 0.026461398717471874\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.1986754966887417,\n \"acc_stderr\": 0.03257847384436776,\n \"\
acc_norm\": 0.1986754966887417,\n \"acc_norm_stderr\": 0.03257847384436776\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.1926605504587156,\n \"acc_stderr\": 0.016909276884936094,\n \"\
acc_norm\": 0.1926605504587156,\n \"acc_norm_stderr\": 0.016909276884936094\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.1527777777777778,\n \"acc_stderr\": 0.024536326026134224,\n \"\
acc_norm\": 0.1527777777777778,\n \"acc_norm_stderr\": 0.024536326026134224\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.25,\n \"acc_stderr\": 0.03039153369274154,\n \"acc_norm\": 0.25,\n\
\ \"acc_norm_stderr\": 0.03039153369274154\n },\n \"harness|hendrycksTest-high_school_world_history|5\"\
: {\n \"acc\": 0.270042194092827,\n \"acc_stderr\": 0.028900721906293426,\n\
\ \"acc_norm\": 0.270042194092827,\n \"acc_norm_stderr\": 0.028900721906293426\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.30493273542600896,\n\
\ \"acc_stderr\": 0.030898610882477515,\n \"acc_norm\": 0.30493273542600896,\n\
\ \"acc_norm_stderr\": 0.030898610882477515\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.22900763358778625,\n \"acc_stderr\": 0.036853466317118506,\n\
\ \"acc_norm\": 0.22900763358778625,\n \"acc_norm_stderr\": 0.036853466317118506\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.24793388429752067,\n \"acc_stderr\": 0.039418975265163025,\n \"\
acc_norm\": 0.24793388429752067,\n \"acc_norm_stderr\": 0.039418975265163025\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.28703703703703703,\n\
\ \"acc_stderr\": 0.043733130409147614,\n \"acc_norm\": 0.28703703703703703,\n\
\ \"acc_norm_stderr\": 0.043733130409147614\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.22085889570552147,\n \"acc_stderr\": 0.032591773927421776,\n\
\ \"acc_norm\": 0.22085889570552147,\n \"acc_norm_stderr\": 0.032591773927421776\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.3125,\n\
\ \"acc_stderr\": 0.043994650575715215,\n \"acc_norm\": 0.3125,\n\
\ \"acc_norm_stderr\": 0.043994650575715215\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.21359223300970873,\n \"acc_stderr\": 0.040580420156460344,\n\
\ \"acc_norm\": 0.21359223300970873,\n \"acc_norm_stderr\": 0.040580420156460344\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.26495726495726496,\n\
\ \"acc_stderr\": 0.028911208802749482,\n \"acc_norm\": 0.26495726495726496,\n\
\ \"acc_norm_stderr\": 0.028911208802749482\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542128,\n \
\ \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542128\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.24393358876117496,\n\
\ \"acc_stderr\": 0.015357212665829468,\n \"acc_norm\": 0.24393358876117496,\n\
\ \"acc_norm_stderr\": 0.015357212665829468\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.24855491329479767,\n \"acc_stderr\": 0.023267528432100174,\n\
\ \"acc_norm\": 0.24855491329479767,\n \"acc_norm_stderr\": 0.023267528432100174\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.23798882681564246,\n\
\ \"acc_stderr\": 0.014242630070574915,\n \"acc_norm\": 0.23798882681564246,\n\
\ \"acc_norm_stderr\": 0.014242630070574915\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.22549019607843138,\n \"acc_stderr\": 0.023929155517351284,\n\
\ \"acc_norm\": 0.22549019607843138,\n \"acc_norm_stderr\": 0.023929155517351284\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.18006430868167203,\n\
\ \"acc_stderr\": 0.021823422857744953,\n \"acc_norm\": 0.18006430868167203,\n\
\ \"acc_norm_stderr\": 0.021823422857744953\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.21604938271604937,\n \"acc_stderr\": 0.022899162918445806,\n\
\ \"acc_norm\": 0.21604938271604937,\n \"acc_norm_stderr\": 0.022899162918445806\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.23404255319148937,\n \"acc_stderr\": 0.025257861359432417,\n \
\ \"acc_norm\": 0.23404255319148937,\n \"acc_norm_stderr\": 0.025257861359432417\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.2457627118644068,\n\
\ \"acc_stderr\": 0.010996156635142692,\n \"acc_norm\": 0.2457627118644068,\n\
\ \"acc_norm_stderr\": 0.010996156635142692\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.18382352941176472,\n \"acc_stderr\": 0.023529242185193106,\n\
\ \"acc_norm\": 0.18382352941176472,\n \"acc_norm_stderr\": 0.023529242185193106\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.2434640522875817,\n \"acc_stderr\": 0.017362473762146634,\n \
\ \"acc_norm\": 0.2434640522875817,\n \"acc_norm_stderr\": 0.017362473762146634\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.23636363636363636,\n\
\ \"acc_stderr\": 0.04069306319721376,\n \"acc_norm\": 0.23636363636363636,\n\
\ \"acc_norm_stderr\": 0.04069306319721376\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.18775510204081633,\n \"acc_stderr\": 0.02500025603954621,\n\
\ \"acc_norm\": 0.18775510204081633,\n \"acc_norm_stderr\": 0.02500025603954621\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.24378109452736318,\n\
\ \"acc_stderr\": 0.03036049015401465,\n \"acc_norm\": 0.24378109452736318,\n\
\ \"acc_norm_stderr\": 0.03036049015401465\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.22,\n \"acc_stderr\": 0.041633319989322695,\n \
\ \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.041633319989322695\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.2891566265060241,\n\
\ \"acc_stderr\": 0.03529486801511115,\n \"acc_norm\": 0.2891566265060241,\n\
\ \"acc_norm_stderr\": 0.03529486801511115\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.22807017543859648,\n \"acc_stderr\": 0.03218093795602357,\n\
\ \"acc_norm\": 0.22807017543859648,\n \"acc_norm_stderr\": 0.03218093795602357\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.23255813953488372,\n\
\ \"mc1_stderr\": 0.014789157531080514,\n \"mc2\": 0.48471292342924077,\n\
\ \"mc2_stderr\": 0.016304873353404845\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.4972375690607735,\n \"acc_stderr\": 0.014052271211616445\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\"\
: 0.0\n }\n}\n```"
repo_url: https://huggingface.co/TomGrc/FN-OpenLLM_2x72B_MoE
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|arc:challenge|25_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|gsm8k|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hellaswag|10_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-06T01-52-10.589662.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-06T01-52-10.589662.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- '**/details_harness|winogrande|5_2024-02-06T01-52-10.589662.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-06T01-52-10.589662.parquet'
- config_name: results
data_files:
- split: 2024_02_06T01_52_10.589662
path:
- results_2024-02-06T01-52-10.589662.parquet
- split: latest
path:
- results_2024-02-06T01-52-10.589662.parquet
---
# Dataset Card for Evaluation run of TomGrc/FN-OpenLLM_2x72B_MoE
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [TomGrc/FN-OpenLLM_2x72B_MoE](https://huggingface.co/TomGrc/FN-OpenLLM_2x72B_MoE) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TomGrc__FN-OpenLLM_2x72B_MoE",
"harness_winogrande_5",
	split="latest")
```
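Once the details are loaded, the aggregate numbers can be summarized without any extra tooling. The sketch below works directly on the `"all"` block of the latest-results JSON shown under "Latest results" (the values are copied from that JSON; the `summary` variable name is illustrative, not part of any API):

```python
# Aggregate metrics for the "all" key, copied from the latest-results JSON
# shown in the "Latest results" section of this card.
all_metrics = {
    "acc": 0.22887713151218728,
    "acc_stderr": 0.02978691747050183,
    "acc_norm": 0.22891869657995814,
    "acc_norm_stderr": 0.03057236693631619,
    "mc1": 0.23255813953488372,
    "mc1_stderr": 0.014789157531080514,
    "mc2": 0.48471292342924077,
    "mc2_stderr": 0.016304873353404845,
}

# Convert each point estimate to a percentage, pairing it with its
# standard error (also as a percentage, rounded to two decimals).
summary = {
    name: (round(value * 100, 2), round(all_metrics[name + "_stderr"] * 100, 2))
    for name, value in all_metrics.items()
    if not name.endswith("_stderr")
}
# e.g. summary["acc"] == (22.89, 2.98)
```

The same pattern applies to any per-task block in the results JSON, since they all share the `acc`/`acc_stderr` key layout.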
## Latest results
These are the [latest results from run 2024-02-06T01:52:10.589662](https://huggingface.co/datasets/open-llm-leaderboard/details_TomGrc__FN-OpenLLM_2x72B_MoE/blob/main/results_2024-02-06T01-52-10.589662.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the "results" config and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.22887713151218728,
"acc_stderr": 0.02978691747050183,
"acc_norm": 0.22891869657995814,
"acc_norm_stderr": 0.03057236693631619,
"mc1": 0.23255813953488372,
"mc1_stderr": 0.014789157531080514,
"mc2": 0.48471292342924077,
"mc2_stderr": 0.016304873353404845
},
"harness|arc:challenge|25": {
"acc": 0.2098976109215017,
"acc_stderr": 0.011900548748047444,
"acc_norm": 0.2551194539249147,
"acc_norm_stderr": 0.012739038695202104
},
"harness|hellaswag|10": {
"acc": 0.25562636924915355,
"acc_stderr": 0.004353212146198434,
"acc_norm": 0.2523401712806214,
"acc_norm_stderr": 0.004334676952703859
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.22,
"acc_stderr": 0.04163331998932268,
"acc_norm": 0.22,
"acc_norm_stderr": 0.04163331998932268
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.25925925925925924,
"acc_stderr": 0.03785714465066656,
"acc_norm": 0.25925925925925924,
"acc_norm_stderr": 0.03785714465066656
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.17763157894736842,
"acc_stderr": 0.031103182383123398,
"acc_norm": 0.17763157894736842,
"acc_norm_stderr": 0.031103182383123398
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.2339622641509434,
"acc_stderr": 0.02605529690115292,
"acc_norm": 0.2339622641509434,
"acc_norm_stderr": 0.02605529690115292
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2569444444444444,
"acc_stderr": 0.03653946969442099,
"acc_norm": 0.2569444444444444,
"acc_norm_stderr": 0.03653946969442099
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.17,
"acc_stderr": 0.0377525168068637,
"acc_norm": 0.17,
"acc_norm_stderr": 0.0377525168068637
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.26,
"acc_stderr": 0.0440844002276808,
"acc_norm": 0.26,
"acc_norm_stderr": 0.0440844002276808
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.21,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.21,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.2023121387283237,
"acc_stderr": 0.030631145539198813,
"acc_norm": 0.2023121387283237,
"acc_norm_stderr": 0.030631145539198813
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.21568627450980393,
"acc_stderr": 0.04092563958237654,
"acc_norm": 0.21568627450980393,
"acc_norm_stderr": 0.04092563958237654
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.27,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.27,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.26382978723404255,
"acc_stderr": 0.028809989854102973,
"acc_norm": 0.26382978723404255,
"acc_norm_stderr": 0.028809989854102973
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.23684210526315788,
"acc_stderr": 0.039994238792813365,
"acc_norm": 0.23684210526315788,
"acc_norm_stderr": 0.039994238792813365
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.2482758620689655,
"acc_stderr": 0.036001056927277716,
"acc_norm": 0.2482758620689655,
"acc_norm_stderr": 0.036001056927277716
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.20899470899470898,
"acc_stderr": 0.02094048156533486,
"acc_norm": 0.20899470899470898,
"acc_norm_stderr": 0.02094048156533486
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.16666666666666666,
"acc_stderr": 0.03333333333333337,
"acc_norm": 0.16666666666666666,
"acc_norm_stderr": 0.03333333333333337
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.18,
"acc_stderr": 0.038612291966536934,
"acc_norm": 0.18,
"acc_norm_stderr": 0.038612291966536934
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.1774193548387097,
"acc_stderr": 0.02173254068932927,
"acc_norm": 0.1774193548387097,
"acc_norm_stderr": 0.02173254068932927
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.15270935960591134,
"acc_stderr": 0.02530890453938063,
"acc_norm": 0.15270935960591134,
"acc_norm_stderr": 0.02530890453938063
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.21818181818181817,
"acc_stderr": 0.03225078108306289,
"acc_norm": 0.21818181818181817,
"acc_norm_stderr": 0.03225078108306289
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.17676767676767677,
"acc_stderr": 0.027178752639044915,
"acc_norm": 0.17676767676767677,
"acc_norm_stderr": 0.027178752639044915
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.19689119170984457,
"acc_stderr": 0.028697873971860664,
"acc_norm": 0.19689119170984457,
"acc_norm_stderr": 0.028697873971860664
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.2358974358974359,
"acc_stderr": 0.021525965407408726,
"acc_norm": 0.2358974358974359,
"acc_norm_stderr": 0.021525965407408726
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2111111111111111,
"acc_stderr": 0.024882116857655075,
"acc_norm": 0.2111111111111111,
"acc_norm_stderr": 0.024882116857655075
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.21008403361344538,
"acc_stderr": 0.026461398717471874,
"acc_norm": 0.21008403361344538,
"acc_norm_stderr": 0.026461398717471874
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.1986754966887417,
"acc_stderr": 0.03257847384436776,
"acc_norm": 0.1986754966887417,
"acc_norm_stderr": 0.03257847384436776
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.1926605504587156,
"acc_stderr": 0.016909276884936094,
"acc_norm": 0.1926605504587156,
"acc_norm_stderr": 0.016909276884936094
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.1527777777777778,
"acc_stderr": 0.024536326026134224,
"acc_norm": 0.1527777777777778,
"acc_norm_stderr": 0.024536326026134224
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.25,
"acc_stderr": 0.03039153369274154,
"acc_norm": 0.25,
"acc_norm_stderr": 0.03039153369274154
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.270042194092827,
"acc_stderr": 0.028900721906293426,
"acc_norm": 0.270042194092827,
"acc_norm_stderr": 0.028900721906293426
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.30493273542600896,
"acc_stderr": 0.030898610882477515,
"acc_norm": 0.30493273542600896,
"acc_norm_stderr": 0.030898610882477515
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.22900763358778625,
"acc_stderr": 0.036853466317118506,
"acc_norm": 0.22900763358778625,
"acc_norm_stderr": 0.036853466317118506
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.24793388429752067,
"acc_stderr": 0.039418975265163025,
"acc_norm": 0.24793388429752067,
"acc_norm_stderr": 0.039418975265163025
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.28703703703703703,
"acc_stderr": 0.043733130409147614,
"acc_norm": 0.28703703703703703,
"acc_norm_stderr": 0.043733130409147614
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.22085889570552147,
"acc_stderr": 0.032591773927421776,
"acc_norm": 0.22085889570552147,
"acc_norm_stderr": 0.032591773927421776
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.3125,
"acc_stderr": 0.043994650575715215,
"acc_norm": 0.3125,
"acc_norm_stderr": 0.043994650575715215
},
"harness|hendrycksTest-management|5": {
"acc": 0.21359223300970873,
"acc_stderr": 0.040580420156460344,
"acc_norm": 0.21359223300970873,
"acc_norm_stderr": 0.040580420156460344
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.26495726495726496,
"acc_stderr": 0.028911208802749482,
"acc_norm": 0.26495726495726496,
"acc_norm_stderr": 0.028911208802749482
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.24393358876117496,
"acc_stderr": 0.015357212665829468,
"acc_norm": 0.24393358876117496,
"acc_norm_stderr": 0.015357212665829468
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.24855491329479767,
"acc_stderr": 0.023267528432100174,
"acc_norm": 0.24855491329479767,
"acc_norm_stderr": 0.023267528432100174
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.23798882681564246,
"acc_stderr": 0.014242630070574915,
"acc_norm": 0.23798882681564246,
"acc_norm_stderr": 0.014242630070574915
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.22549019607843138,
"acc_stderr": 0.023929155517351284,
"acc_norm": 0.22549019607843138,
"acc_norm_stderr": 0.023929155517351284
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.18006430868167203,
"acc_stderr": 0.021823422857744953,
"acc_norm": 0.18006430868167203,
"acc_norm_stderr": 0.021823422857744953
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.21604938271604937,
"acc_stderr": 0.022899162918445806,
"acc_norm": 0.21604938271604937,
"acc_norm_stderr": 0.022899162918445806
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.23404255319148937,
"acc_stderr": 0.025257861359432417,
"acc_norm": 0.23404255319148937,
"acc_norm_stderr": 0.025257861359432417
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.2457627118644068,
"acc_stderr": 0.010996156635142692,
"acc_norm": 0.2457627118644068,
"acc_norm_stderr": 0.010996156635142692
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.18382352941176472,
"acc_stderr": 0.023529242185193106,
"acc_norm": 0.18382352941176472,
"acc_norm_stderr": 0.023529242185193106
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.2434640522875817,
"acc_stderr": 0.017362473762146634,
"acc_norm": 0.2434640522875817,
"acc_norm_stderr": 0.017362473762146634
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.23636363636363636,
"acc_stderr": 0.04069306319721376,
"acc_norm": 0.23636363636363636,
"acc_norm_stderr": 0.04069306319721376
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.18775510204081633,
"acc_stderr": 0.02500025603954621,
"acc_norm": 0.18775510204081633,
"acc_norm_stderr": 0.02500025603954621
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.24378109452736318,
"acc_stderr": 0.03036049015401465,
"acc_norm": 0.24378109452736318,
"acc_norm_stderr": 0.03036049015401465
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.22,
"acc_stderr": 0.041633319989322695,
"acc_norm": 0.22,
"acc_norm_stderr": 0.041633319989322695
},
"harness|hendrycksTest-virology|5": {
"acc": 0.2891566265060241,
"acc_stderr": 0.03529486801511115,
"acc_norm": 0.2891566265060241,
"acc_norm_stderr": 0.03529486801511115
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.22807017543859648,
"acc_stderr": 0.03218093795602357,
"acc_norm": 0.22807017543859648,
"acc_norm_stderr": 0.03218093795602357
},
"harness|truthfulqa:mc|0": {
"mc1": 0.23255813953488372,
"mc1_stderr": 0.014789157531080514,
"mc2": 0.48471292342924077,
"mc2_stderr": 0.016304873353404845
},
"harness|winogrande|5": {
"acc": 0.4972375690607735,
"acc_stderr": 0.014052271211616445
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
irds/lotte_writing_test_search | ---
pretty_name: '`lotte/writing/test/search`'
viewer: false
source_datasets: ['irds/lotte_writing_test']
task_categories:
- text-retrieval
---
# Dataset Card for `lotte/writing/test/search`
The `lotte/writing/test/search` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/lotte#lotte/writing/test/search).
# Data
This dataset provides:
- `queries` (i.e., topics); count=1,071
- `qrels` (relevance assessments); count=3,546
- For `docs`, use [`irds/lotte_writing_test`](https://huggingface.co/datasets/irds/lotte_writing_test)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/lotte_writing_test_search', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/lotte_writing_test_search', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
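For evaluation it is often convenient to group the qrels records shown above into a per-query relevance map. A minimal sketch, using the record schema documented above (`query_id`, `doc_id`, `relevance`); the helper function and sample records here are illustrative, not part of the dataset:

```python
from collections import defaultdict

def qrels_by_query(qrels):
    """Group qrels records ({'query_id', 'doc_id', 'relevance', ...})
    into {query_id: {doc_id: relevance}} for easy lookup."""
    judged = defaultdict(dict)
    for rec in qrels:
        judged[rec['query_id']][rec['doc_id']] = rec['relevance']
    return dict(judged)

# Illustrative records in the qrels schema shown above (not real dataset rows).
sample = [
    {'query_id': 'q1', 'doc_id': 'd1', 'relevance': 1, 'iteration': '0'},
    {'query_id': 'q1', 'doc_id': 'd2', 'relevance': 1, 'iteration': '0'},
    {'query_id': 'q2', 'doc_id': 'd3', 'relevance': 1, 'iteration': '0'},
]
judged = qrels_by_query(sample)
print(judged['q1'])  # {'d1': 1, 'd2': 1}
```

The same pattern applies to the real `qrels` split once loaded via `load_dataset` as shown above.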
## Citation Information
```
@article{Santhanam2021ColBERTv2,
title = "ColBERTv2: Effective and Efficient Retrieval via Lightweight Late Interaction",
author = "Keshav Santhanam and Omar Khattab and Jon Saad-Falcon and Christopher Potts and Matei Zaharia",
journal= "arXiv preprint arXiv:2112.01488",
year = "2021",
url = "https://arxiv.org/abs/2112.01488"
}
```
|
KaiserML/Techie_Urls | ---
dataset_info:
features:
- name: lens_id
dtype: string
- name: title
dtype: string
- name: pdf_urls
sequence: string
- name: domain
sequence: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 105864149
num_examples: 423860
download_size: 53793459
dataset_size: 105864149
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
allenai/mup-full | ---
license:
- odc-by
---
# MuP - Multi Perspective Scientific Document Summarization
Generating summaries of scientific documents is known to be a challenging task. The majority of existing work in summarization assumes a single best gold summary for each document. Having only one gold summary negatively impacts our ability to evaluate the quality of summarization systems, as writing summaries is a subjective activity. At the same time, annotating multiple gold summaries for scientific documents can be extremely expensive, since it requires domain experts to read and understand long scientific documents. This shared task enables exploring methods for generating multi-perspective summaries. We introduce a novel summarization corpus that leverages data from scientific peer reviews to capture diverse perspectives from the reader's point of view.
For more information about the dataset please refer to: https://github.com/allenai/mup |
maxolotl/must-c-en-es-01 | ---
dataset_info:
features:
- name: en
dtype: string
- name: es
dtype: string
splits:
- name: train
num_bytes: 59876087
num_examples: 259892
- name: test
num_bytes: 658233
num_examples: 3035
- name: validation
num_bytes: 310169
num_examples: 1309
download_size: 37505201
dataset_size: 60844489
---
# Dataset Card for "must-c-en-es-01"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gguichard/wsd_myriade_synth_data_id_label_total | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: tokens
sequence: string
- name: wn_sens
sequence: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 51147806.48548672
num_examples: 91188
- name: test
num_bytes: 5683650.514513279
num_examples: 10133
download_size: 14307277
dataset_size: 56831457.0
---
# Dataset Card for "wsd_myriade_synth_data_id_label_total"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/catholic_4800_dataset_20231008_132059 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 760128.0
num_examples: 296
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 52254
dataset_size: 767832.0
---
# Dataset Card for "catholic_4800_dataset_20231008_132059"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/MULTI_VALUE_stsb_subord_conjunction_doubling | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: score
dtype: float64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 1034
num_examples: 5
- name: train
num_bytes: 941
num_examples: 3
download_size: 0
dataset_size: 1975
---
# Dataset Card for "MULTI_VALUE_stsb_subord_conjunction_doubling"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_rhaymison__Mistral-portuguese-luana-7b | ---
pretty_name: Evaluation run of rhaymison/Mistral-portuguese-luana-7b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [rhaymison/Mistral-portuguese-luana-7b](https://huggingface.co/rhaymison/Mistral-portuguese-luana-7b)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_rhaymison__Mistral-portuguese-luana-7b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-04-15T13:42:47.837970](https://huggingface.co/datasets/open-llm-leaderboard/details_rhaymison__Mistral-portuguese-luana-7b/blob/main/results_2024-04-15T13-42-47.837970.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6097186136411111,\n\
\ \"acc_stderr\": 0.03316921521336491,\n \"acc_norm\": 0.6149390666065853,\n\
\ \"acc_norm_stderr\": 0.033846138299720704,\n \"mc1\": 0.4186046511627907,\n\
\ \"mc1_stderr\": 0.017270015284476855,\n \"mc2\": 0.5934914566391061,\n\
\ \"mc2_stderr\": 0.015351475818963607\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5477815699658704,\n \"acc_stderr\": 0.014544519880633827,\n\
\ \"acc_norm\": 0.6023890784982935,\n \"acc_norm_stderr\": 0.01430175222327954\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6261700856403107,\n\
\ \"acc_stderr\": 0.004828305041904403,\n \"acc_norm\": 0.8257319259111731,\n\
\ \"acc_norm_stderr\": 0.0037856457412359396\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695236,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695236\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.562962962962963,\n\
\ \"acc_stderr\": 0.04284958639753401,\n \"acc_norm\": 0.562962962962963,\n\
\ \"acc_norm_stderr\": 0.04284958639753401\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.631578947368421,\n \"acc_stderr\": 0.03925523381052932,\n\
\ \"acc_norm\": 0.631578947368421,\n \"acc_norm_stderr\": 0.03925523381052932\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.58,\n\
\ \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \
\ \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6679245283018868,\n \"acc_stderr\": 0.028985455652334388,\n\
\ \"acc_norm\": 0.6679245283018868,\n \"acc_norm_stderr\": 0.028985455652334388\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6736111111111112,\n\
\ \"acc_stderr\": 0.03921067198982266,\n \"acc_norm\": 0.6736111111111112,\n\
\ \"acc_norm_stderr\": 0.03921067198982266\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620333,\n \
\ \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.53,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.53,\n\
\ \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \
\ \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6011560693641619,\n\
\ \"acc_stderr\": 0.037336266553835096,\n \"acc_norm\": 0.6011560693641619,\n\
\ \"acc_norm_stderr\": 0.037336266553835096\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.43137254901960786,\n \"acc_stderr\": 0.04928099597287533,\n\
\ \"acc_norm\": 0.43137254901960786,\n \"acc_norm_stderr\": 0.04928099597287533\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.76,\n \"acc_stderr\": 0.04292346959909283,\n \"acc_norm\": 0.76,\n\
\ \"acc_norm_stderr\": 0.04292346959909283\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5319148936170213,\n \"acc_stderr\": 0.03261936918467381,\n\
\ \"acc_norm\": 0.5319148936170213,\n \"acc_norm_stderr\": 0.03261936918467381\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4298245614035088,\n\
\ \"acc_stderr\": 0.04657047260594963,\n \"acc_norm\": 0.4298245614035088,\n\
\ \"acc_norm_stderr\": 0.04657047260594963\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.593103448275862,\n \"acc_stderr\": 0.04093793981266236,\n\
\ \"acc_norm\": 0.593103448275862,\n \"acc_norm_stderr\": 0.04093793981266236\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.3862433862433862,\n \"acc_stderr\": 0.025075981767601684,\n \"\
acc_norm\": 0.3862433862433862,\n \"acc_norm_stderr\": 0.025075981767601684\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.38095238095238093,\n\
\ \"acc_stderr\": 0.04343525428949097,\n \"acc_norm\": 0.38095238095238093,\n\
\ \"acc_norm_stderr\": 0.04343525428949097\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6967741935483871,\n\
\ \"acc_stderr\": 0.026148685930671742,\n \"acc_norm\": 0.6967741935483871,\n\
\ \"acc_norm_stderr\": 0.026148685930671742\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5123152709359606,\n \"acc_stderr\": 0.035169204442208966,\n\
\ \"acc_norm\": 0.5123152709359606,\n \"acc_norm_stderr\": 0.035169204442208966\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.62,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\"\
: 0.62,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7151515151515152,\n \"acc_stderr\": 0.03524390844511781,\n\
\ \"acc_norm\": 0.7151515151515152,\n \"acc_norm_stderr\": 0.03524390844511781\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7474747474747475,\n \"acc_stderr\": 0.030954055470365897,\n \"\
acc_norm\": 0.7474747474747475,\n \"acc_norm_stderr\": 0.030954055470365897\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8341968911917098,\n \"acc_stderr\": 0.026839845022314415,\n\
\ \"acc_norm\": 0.8341968911917098,\n \"acc_norm_stderr\": 0.026839845022314415\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.5794871794871795,\n \"acc_stderr\": 0.025028610276710862,\n\
\ \"acc_norm\": 0.5794871794871795,\n \"acc_norm_stderr\": 0.025028610276710862\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.36666666666666664,\n \"acc_stderr\": 0.029381620726465073,\n \
\ \"acc_norm\": 0.36666666666666664,\n \"acc_norm_stderr\": 0.029381620726465073\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6260504201680672,\n \"acc_stderr\": 0.031429466378837076,\n\
\ \"acc_norm\": 0.6260504201680672,\n \"acc_norm_stderr\": 0.031429466378837076\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.36423841059602646,\n \"acc_stderr\": 0.03929111781242742,\n \"\
acc_norm\": 0.36423841059602646,\n \"acc_norm_stderr\": 0.03929111781242742\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.7963302752293578,\n \"acc_stderr\": 0.017266742087630793,\n \"\
acc_norm\": 0.7963302752293578,\n \"acc_norm_stderr\": 0.017266742087630793\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.46296296296296297,\n \"acc_stderr\": 0.03400603625538271,\n \"\
acc_norm\": 0.46296296296296297,\n \"acc_norm_stderr\": 0.03400603625538271\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7696078431372549,\n \"acc_stderr\": 0.029554292605695066,\n \"\
acc_norm\": 0.7696078431372549,\n \"acc_norm_stderr\": 0.029554292605695066\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.759493670886076,\n \"acc_stderr\": 0.027820781981149685,\n \
\ \"acc_norm\": 0.759493670886076,\n \"acc_norm_stderr\": 0.027820781981149685\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.600896860986547,\n\
\ \"acc_stderr\": 0.032867453125679603,\n \"acc_norm\": 0.600896860986547,\n\
\ \"acc_norm_stderr\": 0.032867453125679603\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7175572519083969,\n \"acc_stderr\": 0.03948406125768361,\n\
\ \"acc_norm\": 0.7175572519083969,\n \"acc_norm_stderr\": 0.03948406125768361\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7933884297520661,\n \"acc_stderr\": 0.03695980128098823,\n \"\
acc_norm\": 0.7933884297520661,\n \"acc_norm_stderr\": 0.03695980128098823\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7222222222222222,\n\
\ \"acc_stderr\": 0.043300437496507416,\n \"acc_norm\": 0.7222222222222222,\n\
\ \"acc_norm_stderr\": 0.043300437496507416\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7177914110429447,\n \"acc_stderr\": 0.03536117886664742,\n\
\ \"acc_norm\": 0.7177914110429447,\n \"acc_norm_stderr\": 0.03536117886664742\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4375,\n\
\ \"acc_stderr\": 0.04708567521880525,\n \"acc_norm\": 0.4375,\n \
\ \"acc_norm_stderr\": 0.04708567521880525\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7184466019417476,\n \"acc_stderr\": 0.04453254836326466,\n\
\ \"acc_norm\": 0.7184466019417476,\n \"acc_norm_stderr\": 0.04453254836326466\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8846153846153846,\n\
\ \"acc_stderr\": 0.020930193185179333,\n \"acc_norm\": 0.8846153846153846,\n\
\ \"acc_norm_stderr\": 0.020930193185179333\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7777777777777778,\n\
\ \"acc_stderr\": 0.014866821664709583,\n \"acc_norm\": 0.7777777777777778,\n\
\ \"acc_norm_stderr\": 0.014866821664709583\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6734104046242775,\n \"acc_stderr\": 0.02524826477424284,\n\
\ \"acc_norm\": 0.6734104046242775,\n \"acc_norm_stderr\": 0.02524826477424284\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.45251396648044695,\n\
\ \"acc_stderr\": 0.01664691480443877,\n \"acc_norm\": 0.45251396648044695,\n\
\ \"acc_norm_stderr\": 0.01664691480443877\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6830065359477124,\n \"acc_stderr\": 0.02664327847450875,\n\
\ \"acc_norm\": 0.6830065359477124,\n \"acc_norm_stderr\": 0.02664327847450875\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.684887459807074,\n\
\ \"acc_stderr\": 0.026385273703464482,\n \"acc_norm\": 0.684887459807074,\n\
\ \"acc_norm_stderr\": 0.026385273703464482\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6728395061728395,\n \"acc_stderr\": 0.026105673861409828,\n\
\ \"acc_norm\": 0.6728395061728395,\n \"acc_norm_stderr\": 0.026105673861409828\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4326241134751773,\n \"acc_stderr\": 0.029555454236778852,\n \
\ \"acc_norm\": 0.4326241134751773,\n \"acc_norm_stderr\": 0.029555454236778852\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4335071707953064,\n\
\ \"acc_stderr\": 0.012656810383983965,\n \"acc_norm\": 0.4335071707953064,\n\
\ \"acc_norm_stderr\": 0.012656810383983965\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6176470588235294,\n \"acc_stderr\": 0.02952009569768776,\n\
\ \"acc_norm\": 0.6176470588235294,\n \"acc_norm_stderr\": 0.02952009569768776\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.619281045751634,\n \"acc_stderr\": 0.0196438015579248,\n \
\ \"acc_norm\": 0.619281045751634,\n \"acc_norm_stderr\": 0.0196438015579248\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n\
\ \"acc_stderr\": 0.04461272175910509,\n \"acc_norm\": 0.6818181818181818,\n\
\ \"acc_norm_stderr\": 0.04461272175910509\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.6775510204081633,\n \"acc_stderr\": 0.029923100563683906,\n\
\ \"acc_norm\": 0.6775510204081633,\n \"acc_norm_stderr\": 0.029923100563683906\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.835820895522388,\n\
\ \"acc_stderr\": 0.02619392354445413,\n \"acc_norm\": 0.835820895522388,\n\
\ \"acc_norm_stderr\": 0.02619392354445413\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.85,\n \"acc_stderr\": 0.03588702812826368,\n \
\ \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.03588702812826368\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5180722891566265,\n\
\ \"acc_stderr\": 0.03889951252827216,\n \"acc_norm\": 0.5180722891566265,\n\
\ \"acc_norm_stderr\": 0.03889951252827216\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8187134502923976,\n \"acc_stderr\": 0.02954774168764004,\n\
\ \"acc_norm\": 0.8187134502923976,\n \"acc_norm_stderr\": 0.02954774168764004\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4186046511627907,\n\
\ \"mc1_stderr\": 0.017270015284476855,\n \"mc2\": 0.5934914566391061,\n\
\ \"mc2_stderr\": 0.015351475818963607\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7774269928966061,\n \"acc_stderr\": 0.011690933809712666\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.38817285822592873,\n \
\ \"acc_stderr\": 0.013423607564002737\n }\n}\n```"
repo_url: https://huggingface.co/rhaymison/Mistral-portuguese-luana-7b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|arc:challenge|25_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|gsm8k|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hellaswag|10_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-15T13-42-47.837970.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-15T13-42-47.837970.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- '**/details_harness|winogrande|5_2024-04-15T13-42-47.837970.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-04-15T13-42-47.837970.parquet'
- config_name: results
data_files:
- split: 2024_04_15T13_42_47.837970
path:
- results_2024-04-15T13-42-47.837970.parquet
- split: latest
path:
- results_2024-04-15T13-42-47.837970.parquet
---
# Dataset Card for Evaluation run of rhaymison/Mistral-portuguese-luana-7b
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [rhaymison/Mistral-portuguese-luana-7b](https://huggingface.co/rhaymison/Mistral-portuguese-luana-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_rhaymison__Mistral-portuguese-luana-7b",
"harness_winogrande_5",
	split="latest")
```
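Once loaded, the per-task metrics follow the flat `harness|<task>|<n_shot>` naming shown in the results JSON below. As a minimal sketch (using a truncated, hard-coded copy of the values from this card rather than a live download), the per-task accuracies can be flattened with a dictionary comprehension:

```python
# Truncated copy of the results structure shown in this card
# (keys are "harness|<task>|<n_shot>"; "all" holds the aggregate).
results = {
    "all": {"acc": 0.6097186136411111, "acc_norm": 0.6149390666065853},
    "harness|arc:challenge|25": {"acc": 0.5477815699658704, "acc_norm": 0.6023890784982935},
    "harness|hellaswag|10": {"acc": 0.6261700856403107, "acc_norm": 0.8257319259111731},
}

# Flatten per-task accuracy, skipping the aggregate entry.
per_task_acc = {
    task: metrics["acc"]
    for task, metrics in results.items()
    if task != "all"
}

# Identify the strongest task in this (truncated) sample.
best_task = max(per_task_acc, key=per_task_acc.get)
print(best_task)  # harness|hellaswag|10
```

The same pattern applies to the full results file, whose tasks span ARC, HellaSwag, the `hendrycksTest-*` (MMLU) subjects, TruthfulQA, and Winogrande.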
## Latest results
These are the [latest results from run 2024-04-15T13:42:47.837970](https://huggingface.co/datasets/open-llm-leaderboard/details_rhaymison__Mistral-portuguese-luana-7b/blob/main/results_2024-04-15T13-42-47.837970.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6097186136411111,
"acc_stderr": 0.03316921521336491,
"acc_norm": 0.6149390666065853,
"acc_norm_stderr": 0.033846138299720704,
"mc1": 0.4186046511627907,
"mc1_stderr": 0.017270015284476855,
"mc2": 0.5934914566391061,
"mc2_stderr": 0.015351475818963607
},
"harness|arc:challenge|25": {
"acc": 0.5477815699658704,
"acc_stderr": 0.014544519880633827,
"acc_norm": 0.6023890784982935,
"acc_norm_stderr": 0.01430175222327954
},
"harness|hellaswag|10": {
"acc": 0.6261700856403107,
"acc_stderr": 0.004828305041904403,
"acc_norm": 0.8257319259111731,
"acc_norm_stderr": 0.0037856457412359396
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695236,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695236
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.562962962962963,
"acc_stderr": 0.04284958639753401,
"acc_norm": 0.562962962962963,
"acc_norm_stderr": 0.04284958639753401
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.631578947368421,
"acc_stderr": 0.03925523381052932,
"acc_norm": 0.631578947368421,
"acc_norm_stderr": 0.03925523381052932
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6679245283018868,
"acc_stderr": 0.028985455652334388,
"acc_norm": 0.6679245283018868,
"acc_norm_stderr": 0.028985455652334388
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6736111111111112,
"acc_stderr": 0.03921067198982266,
"acc_norm": 0.6736111111111112,
"acc_norm_stderr": 0.03921067198982266
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.53,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.53,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6011560693641619,
"acc_stderr": 0.037336266553835096,
"acc_norm": 0.6011560693641619,
"acc_norm_stderr": 0.037336266553835096
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.43137254901960786,
"acc_stderr": 0.04928099597287533,
"acc_norm": 0.43137254901960786,
"acc_norm_stderr": 0.04928099597287533
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5319148936170213,
"acc_stderr": 0.03261936918467381,
"acc_norm": 0.5319148936170213,
"acc_norm_stderr": 0.03261936918467381
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4298245614035088,
"acc_stderr": 0.04657047260594963,
"acc_norm": 0.4298245614035088,
"acc_norm_stderr": 0.04657047260594963
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.593103448275862,
"acc_stderr": 0.04093793981266236,
"acc_norm": 0.593103448275862,
"acc_norm_stderr": 0.04093793981266236
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3862433862433862,
"acc_stderr": 0.025075981767601684,
"acc_norm": 0.3862433862433862,
"acc_norm_stderr": 0.025075981767601684
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.38095238095238093,
"acc_stderr": 0.04343525428949097,
"acc_norm": 0.38095238095238093,
"acc_norm_stderr": 0.04343525428949097
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6967741935483871,
"acc_stderr": 0.026148685930671742,
"acc_norm": 0.6967741935483871,
"acc_norm_stderr": 0.026148685930671742
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5123152709359606,
"acc_stderr": 0.035169204442208966,
"acc_norm": 0.5123152709359606,
"acc_norm_stderr": 0.035169204442208966
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.62,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.62,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7151515151515152,
"acc_stderr": 0.03524390844511781,
"acc_norm": 0.7151515151515152,
"acc_norm_stderr": 0.03524390844511781
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7474747474747475,
"acc_stderr": 0.030954055470365897,
"acc_norm": 0.7474747474747475,
"acc_norm_stderr": 0.030954055470365897
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8341968911917098,
"acc_stderr": 0.026839845022314415,
"acc_norm": 0.8341968911917098,
"acc_norm_stderr": 0.026839845022314415
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5794871794871795,
"acc_stderr": 0.025028610276710862,
"acc_norm": 0.5794871794871795,
"acc_norm_stderr": 0.025028610276710862
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.36666666666666664,
"acc_stderr": 0.029381620726465073,
"acc_norm": 0.36666666666666664,
"acc_norm_stderr": 0.029381620726465073
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6260504201680672,
"acc_stderr": 0.031429466378837076,
"acc_norm": 0.6260504201680672,
"acc_norm_stderr": 0.031429466378837076
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.36423841059602646,
"acc_stderr": 0.03929111781242742,
"acc_norm": 0.36423841059602646,
"acc_norm_stderr": 0.03929111781242742
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7963302752293578,
"acc_stderr": 0.017266742087630793,
"acc_norm": 0.7963302752293578,
"acc_norm_stderr": 0.017266742087630793
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.46296296296296297,
"acc_stderr": 0.03400603625538271,
"acc_norm": 0.46296296296296297,
"acc_norm_stderr": 0.03400603625538271
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7696078431372549,
"acc_stderr": 0.029554292605695066,
"acc_norm": 0.7696078431372549,
"acc_norm_stderr": 0.029554292605695066
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.759493670886076,
"acc_stderr": 0.027820781981149685,
"acc_norm": 0.759493670886076,
"acc_norm_stderr": 0.027820781981149685
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.600896860986547,
"acc_stderr": 0.032867453125679603,
"acc_norm": 0.600896860986547,
"acc_norm_stderr": 0.032867453125679603
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7175572519083969,
"acc_stderr": 0.03948406125768361,
"acc_norm": 0.7175572519083969,
"acc_norm_stderr": 0.03948406125768361
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7933884297520661,
"acc_stderr": 0.03695980128098823,
"acc_norm": 0.7933884297520661,
"acc_norm_stderr": 0.03695980128098823
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.043300437496507416,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.043300437496507416
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7177914110429447,
"acc_stderr": 0.03536117886664742,
"acc_norm": 0.7177914110429447,
"acc_norm_stderr": 0.03536117886664742
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4375,
"acc_stderr": 0.04708567521880525,
"acc_norm": 0.4375,
"acc_norm_stderr": 0.04708567521880525
},
"harness|hendrycksTest-management|5": {
"acc": 0.7184466019417476,
"acc_stderr": 0.04453254836326466,
"acc_norm": 0.7184466019417476,
"acc_norm_stderr": 0.04453254836326466
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8846153846153846,
"acc_stderr": 0.020930193185179333,
"acc_norm": 0.8846153846153846,
"acc_norm_stderr": 0.020930193185179333
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.014866821664709583,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.014866821664709583
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6734104046242775,
"acc_stderr": 0.02524826477424284,
"acc_norm": 0.6734104046242775,
"acc_norm_stderr": 0.02524826477424284
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.45251396648044695,
"acc_stderr": 0.01664691480443877,
"acc_norm": 0.45251396648044695,
"acc_norm_stderr": 0.01664691480443877
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6830065359477124,
"acc_stderr": 0.02664327847450875,
"acc_norm": 0.6830065359477124,
"acc_norm_stderr": 0.02664327847450875
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.684887459807074,
"acc_stderr": 0.026385273703464482,
"acc_norm": 0.684887459807074,
"acc_norm_stderr": 0.026385273703464482
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6728395061728395,
"acc_stderr": 0.026105673861409828,
"acc_norm": 0.6728395061728395,
"acc_norm_stderr": 0.026105673861409828
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4326241134751773,
"acc_stderr": 0.029555454236778852,
"acc_norm": 0.4326241134751773,
"acc_norm_stderr": 0.029555454236778852
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4335071707953064,
"acc_stderr": 0.012656810383983965,
"acc_norm": 0.4335071707953064,
"acc_norm_stderr": 0.012656810383983965
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6176470588235294,
"acc_stderr": 0.02952009569768776,
"acc_norm": 0.6176470588235294,
"acc_norm_stderr": 0.02952009569768776
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.619281045751634,
"acc_stderr": 0.0196438015579248,
"acc_norm": 0.619281045751634,
"acc_norm_stderr": 0.0196438015579248
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6818181818181818,
"acc_stderr": 0.04461272175910509,
"acc_norm": 0.6818181818181818,
"acc_norm_stderr": 0.04461272175910509
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6775510204081633,
"acc_stderr": 0.029923100563683906,
"acc_norm": 0.6775510204081633,
"acc_norm_stderr": 0.029923100563683906
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.835820895522388,
"acc_stderr": 0.02619392354445413,
"acc_norm": 0.835820895522388,
"acc_norm_stderr": 0.02619392354445413
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.03588702812826368,
"acc_norm": 0.85,
"acc_norm_stderr": 0.03588702812826368
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5180722891566265,
"acc_stderr": 0.03889951252827216,
"acc_norm": 0.5180722891566265,
"acc_norm_stderr": 0.03889951252827216
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8187134502923976,
"acc_stderr": 0.02954774168764004,
"acc_norm": 0.8187134502923976,
"acc_norm_stderr": 0.02954774168764004
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4186046511627907,
"mc1_stderr": 0.017270015284476855,
"mc2": 0.5934914566391061,
"mc2_stderr": 0.015351475818963607
},
"harness|winogrande|5": {
"acc": 0.7774269928966061,
"acc_stderr": 0.011690933809712666
},
"harness|gsm8k|5": {
"acc": 0.38817285822592873,
"acc_stderr": 0.013423607564002737
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
lucasjca/ProcedimentosSUS3 | ---
dataset_info:
features:
- name: audio
dtype: audio
splits:
- name: train
num_bytes: 41920700.0
num_examples: 89
download_size: 41553598
dataset_size: 41920700.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rmacek/zib2_common_voice | ---
dataset_info:
features:
- name: audio_file
dtype: string
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 25537
num_examples: 194
download_size: 15730
dataset_size: 25537
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Yaasr/bundestagv2 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 74359398844.398
num_examples: 292462
download_size: 107587361654
dataset_size: 74359398844.398
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- automatic-speech-recognition
language:
- de
---
# Dataset Card for "bundestagv2"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
## Dataset Description
This is the second version of our Bundestag dataset and the first one to reach sufficient quality.
It was obtained from the .srt subtitle files found on the website of the "Parlamentsfernsehen Bundestag" and further refined using forced alignment.
### Dataset Summary
Almost 300k rows of data, filtered so that each audio clip is at most 30 seconds long.
### Supported Tasks and Leaderboards
This dataset is mostly intended for ASR usage.
## Dataset Structure
### Data Instances
Every instance ranges from 0 to 30 seconds of audio.
### Data Fields
We provide the waveforms and the corresponding transcriptions.
### Data Splits
The dataset currently ships as a single large `train` split, so consider applying `train_test_split` before using it for training.
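As a rough sketch of what such a split does (with the `datasets` library you would simply call `ds.train_test_split(test_size=0.1, seed=42)`; the 10% ratio and seed below are illustrative assumptions, not recommendations):

```python
import random

def train_test_split(indices, test_size=0.1, seed=42):
    """Pure-Python sketch of the shuffle-and-slice split that
    datasets.Dataset.train_test_split performs under the hood."""
    idx = list(indices)
    random.Random(seed).shuffle(idx)  # deterministic shuffle for reproducibility
    n_test = int(len(idx) * test_size)
    return idx[n_test:], idx[:n_test]  # (train indices, test indices)

# With ~292k rows, a 10% hold-out leaves roughly 263k rows for training.
train_idx, test_idx = train_test_split(range(292462))
```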
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
Aligned using oliverguhr/wav2vec2-large-xlsr-53-german-cv9
## Considerations for Using the Data
### Discussion of Biases
There are quite a few dialects present throughout the dataset.
### Other Known Limitations
Many of the sessions were held online, so the audio quality varies drastically and is sometimes too poor to ensure good transcriptions. The .srt files used are not flawless, so there may be some errors in the dataset. Additionally, the alignment is occasionally off, producing clips whose transcript covers speech that is missing from the audio, which could increase hallucinations.
## Additional Information
### Licensing Information
This dataset is currently gated and will remain so as long as we are uncertain about the legality of republishing audio of the German "Bundestag".
|
joelniklaus/legalnero | ---
annotations_creators:
- other
language_creators:
- found
language:
- ro
license:
- cc-by-nc-nd-4.0
multilinguality:
- monolingual
paperswithcode_id: null
pretty_name: Romanian Named Entity Recognition in the Legal domain (LegalNERo)
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
tags:
- legal
---
# Dataset Card for Romanian Named Entity Recognition in the Legal domain (LegalNERo)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://zenodo.org/record/4922385
- **Paper:** Pais, V., Mitrofan, M., Gasan, C. L., Coneschi, V., & Ianov, A. (2021). Named Entity Recognition in the {R}omanian Legal Domain. Proceedings of the Natural Legal Language Processing Workshop 2021, 9–18. https://doi.org/10.18653/v1/2021.nllp-1.2
- **Leaderboard:**
- **Point of Contact:** [Joel Niklaus](mailto:joel.niklaus.2@bfh.ch)
### Dataset Summary
LegalNERo is a manually annotated corpus for named entity recognition in the Romanian legal domain. It provides gold annotations for organizations, locations, persons, time and legal resources mentioned in legal documents. Additionally it offers GEONAMES codes for the named entities annotated as location (where a link could be established).
### Supported Tasks and Leaderboards
The dataset supports the task of named entity recognition.
### Languages
Since the legal documents for LegalNERo are extracted from the larger [MARCELL-RO corpus](https://elrc-share.eu/repository/browse/marcell-romanian-legislative-subcorpus-v2/2da548428b9d11eb9c1a00155d026706ce94a6b59ffc4b0e9fb5cd9cebe6889e/), the language in the dataset is Romanian as it is used in national legislation ranging from 1881 to 2021.
## Dataset Structure
### Data Instances
The file format is jsonl and three data splits are present (train, validation and test). Named Entity annotations are non-overlapping.
Rows only containing one word (mostly words such as `\t\t\t`, `\n` or `-----`) have been filtered out.
### Data Fields
The files contain the following data fields
- `file_name`: The file_name of the applicable annotation document
- `words`: The list of tokens obtained by applying the spaCy (v3.3.1) Romanian tokenizer to the sentences. For more information see `convert_to_hf_dataset.py`.
- `ner`: The list of ner tags. The list of labels for the named entities that are covered by the dataset are the following:
- `LEGAL`: Legal reference/resources
- `LOC`: Location
- `ORG`: Organization
- `PER`: Person
- `TIME`: Time reference
- `O`: No entity annotation present
The final tagset (in IOB notation) is the following: `['O', 'B-TIME', 'I-TIME', 'B-LEGAL', 'I-LEGAL', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'B-PER', 'I-PER']`
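The IOB tagset above can be decoded back into entity spans with a small helper. The following is an illustrative sketch only (the function name and the policy of letting a stray `I-` tag open a new span are our assumptions, not part of the dataset):

```python
def iob_to_spans(tags):
    """Collapse a list of IOB tags into (label, start, end) spans; `end` is exclusive.
    An I- tag that does not continue the current label starts a new span."""
    spans, label, start = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-") or (tag.startswith("I-") and tag[2:] != label):
            if label is not None:          # close the span in progress
                spans.append((label, start, i))
            label, start = tag[2:], i      # open a new span
        elif tag == "O" and label is not None:
            spans.append((label, start, i))
            label, start = None, None
    if label is not None:                  # span running to the end of the sentence
        spans.append((label, start, len(tags)))
    return spans

# e.g. tags for a sentence whose first two tokens name an organization
spans = iob_to_spans(["B-ORG", "I-ORG", "O", "O", "B-LOC"])
# → [("ORG", 0, 2), ("LOC", 4, 5)]
```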
### Data Splits
Splits created by Joel Niklaus.
| split | number of documents | number of sentences |
|:---------------|--------------------:|--------------------:|
| train | 296 (80%) | 7552 |
| validation | 37 (10%) | 966 |
| test | 37 (10%) | 907 |
## Dataset Creation
### Curation Rationale
The dataset provides gold annotations for organizations, locations, persons, time and legal resources mentioned in Romanian legal documents.
### Source Data
#### Initial Data Collection and Normalization
The LegalNERo corpus consists of 370 documents from the larger [MARCELL-RO corpus](https://elrc-share.eu/repository/browse/marcell-romanian-legislative-subcorpus-v2/2da548428b9d11eb9c1a00155d026706ce94a6b59ffc4b0e9fb5cd9cebe6889e/). In the following we give a short description of the crawling process for the MARCELL-RO corpus.
*The MARCELL-RO corpus "contains 163,274 files, which represent the body of national legislation ranging from 1881 to 2021. This corpus includes mainly: governmental decisions, ministerial orders, decisions, decrees and laws. All the texts were obtained via crawling from the public Romanian legislative portal . We have not distinguished between in force and "out of force" laws because it is difficult to do this automatically and there is no external resource to use to distinguish between them. The texts were extracted from the original HTML format and converted into TXT files. Each file has multiple levels of annotation: firstly the texts were tokenized, lemmatized and morphologically annotated using the Tokenizing, Tagging and Lemmatizing (TTL) text processing platform developed at RACAI, then dependency parsed with NLP-Cube, named entities were identified using a NER tool developed at RACAI, nominal phrases were identified also with TTL, while IATE terms and EuroVoc descriptors were identified using an internal tool. All processing tools were integrated into an end-to-end pipeline available within the RELATE platform and as a dockerized version. The files were annotated with the latest version of the pipeline completed within Activity 4 of the MARCELL project."* [Link](https://elrc-share.eu/repository/browse/marcell-romanian-legislative-subcorpus-v2/2da548428b9d11eb9c1a00155d026706ce94a6b59ffc4b0e9fb5cd9cebe6889e/)
#### Who are the source language producers?
The source language producers are presumably politicians and lawyers.
### Annotations
#### Annotation process
*“Annotation of the LegalNERo corpus was performed by 5 human annotators, supervised by two senior researchers at the Institute for Artificial Intelligence "Mihai Drăgănescu" of the Romanian Academy (RACAI). For annotation purposes we used the BRAT tool4 […].
Inside the legal reference class, we considered sub-entities of type *organization* and *time*. This allows for using the LegalNERo corpus in two scenarios: using all the 5 entity classes or using only the remaining general-purpose classes. The LegalNERo corpus contains a total of 370 documents from the larger MARCELL-RO corpus. These documents were split amongst the 5 annotators, with certain documents being annotated by multiple annotators. Each annotator manually annotated 100 documents. The annotators were unaware of the overlap, which allowed us to compute an inter-annotator agreement. We used the Cohen’s Kappa measure and obtained a value of 0.89, which we consider to be a good result.”* (Pais et al., 2021)
#### Who are the annotators?
*"[...] 5 human annotators, supervised by two senior researchers at the Institute for Artificial Intelligence "Mihai Drăgănescu" of the Romanian Academy (RACAI)."*
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Note that the information given in this dataset card refers to the dataset version as provided by Joel Niklaus and Veton Matoshi. The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset consisting of several other datasets from different sources requires postprocessing. Therefore, the structure of the dataset at hand, including the folder structure, may differ considerably from the original dataset. In addition, differences with regard to dataset statistics as given in the respective papers can be expected. The reader is advised to have a look at the conversion script ```convert_to_hf_dataset.py``` in order to retrace the steps for converting the original dataset into the present jsonl format. For further information on the original dataset structure, we refer to the bibliographical references and the original GitHub repositories and/or web pages provided in this dataset card.
## Additional Information
### Dataset Curators
The names of the original dataset curators and creators can be found in references given below, in the section *Citation Information*.
Additional changes were made by Joel Niklaus ([Email](mailto:joel.niklaus.2@bfh.ch); [Github](https://github.com/joelniklaus)) and Veton Matoshi ([Email](mailto:veton.matoshi@bfh.ch); [Github](https://github.com/kapllan)).
### Licensing Information
[Creative Commons Attribution Non Commercial No Derivatives 4.0 International](https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode)
### Citation Information
```
@dataset{pais_vasile_2021_4922385,
author = {Păiș, Vasile and
Mitrofan, Maria and
Gasan, Carol Luca and
Ianov, Alexandru and
Ghiță, Corvin and
Coneschi, Vlad Silviu and
Onuț, Andrei},
title = {{Romanian Named Entity Recognition in the Legal
domain (LegalNERo)}},
month = may,
year = 2021,
publisher = {Zenodo},
doi = {10.5281/zenodo.4922385},
url = {https://doi.org/10.5281/zenodo.4922385}
}
```
```
@inproceedings{pais-etal-2021-named,
author = {Pais, Vasile and Mitrofan, Maria and Gasan, Carol Luca and Coneschi, Vlad and Ianov, Alexandru},
booktitle = {Proceedings of the Natural Legal Language Processing Workshop 2021},
doi = {10.18653/v1/2021.nllp-1.2},
month = {nov},
pages = {9--18},
publisher = {Association for Computational Linguistics},
title = {{Named Entity Recognition in the {R}omanian Legal Domain}},
url = {https://aclanthology.org/2021.nllp-1.2},
year = {2021}
}
```
### Contributions
Thanks to [@JoelNiklaus](https://github.com/joelniklaus) and [@kapllan](https://github.com/kapllan) for adding this dataset. |