| datasetId | card |
|---|---|
gguichard/wsd_myriade_synth_data_gpt4turbo | ---
dataset_info:
features:
- name: tokens
sequence: string
- name: wn_sens
sequence: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 2222567
num_examples: 3391
download_size: 473896
dataset_size: 2222567
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "wsd_myriade_synth_data_gpt4turbo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/van_gogh_fgo | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of van_gogh/ヴァン・ゴッホ/梵高 (Fate/Grand Order)
This is the dataset of van_gogh/ヴァン・ゴッホ/梵高 (Fate/Grand Order), containing 500 images and their tags.
The core tags of this character are `braid, side_braid, blue_eyes, crown_braid, brown_hair, hat, long_hair, orange_hair`, which are pruned in this dataset.
Images were crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 923.98 MiB | [Download](https://huggingface.co/datasets/CyberHarem/van_gogh_fgo/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 500 | 771.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/van_gogh_fgo/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1206 | 1.51 GiB | [Download](https://huggingface.co/datasets/CyberHarem/van_gogh_fgo/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/van_gogh_fgo',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
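The IMG+TXT packages listed above presumably store each image next to a same-named `.txt` file holding its comma-separated tags (an assumption about the archive layout, not documented here). A minimal sketch of pairing them after extraction, demonstrated on a throwaway directory:

```python
import os
import tempfile

def pair_images_with_tags(dataset_dir):
    """Map each image file to the tag list in its same-named .txt sidecar."""
    pairs = {}
    for name in sorted(os.listdir(dataset_dir)):
        stem, ext = os.path.splitext(name)
        if ext.lower() not in ('.png', '.jpg', '.jpeg', '.webp'):
            continue
        txt_path = os.path.join(dataset_dir, stem + '.txt')
        if os.path.exists(txt_path):
            with open(txt_path, encoding='utf-8') as f:
                pairs[name] = [t.strip() for t in f.read().split(',') if t.strip()]
    return pairs

# demo on a temporary directory standing in for the extracted zip
with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, '0001.png'), 'wb').close()
    with open(os.path.join(d, '0001.txt'), 'w', encoding='utf-8') as f:
        f.write('1girl, sunflower, solo')
    print(pair_images_with_tags(d))  # {'0001.png': ['1girl', 'sunflower', 'solo']}
```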
## List of Clusters
A list of tag clustering results; some outfits may be discoverable here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 |  |  |  |  |  | 1girl, blue_overalls, puffy_sleeves, solo, sunflower, blush, looking_at_viewer, smile, striped_headwear, upper_body, black_sleeves, open_mouth, holding_flower, striped_clothes, belt, buckle, navel |
| 1 | 13 |  |  |  |  |  | 1girl, looking_at_viewer, puffy_sleeves, small_breasts, smile, solo, sunflower, blue_overalls, belt, open_mouth, holding, navel, blush, yellow_headwear, straw_hat |
| 2 | 10 |  |  |  |  |  | 1girl, belt, black_sleeves, blue_overalls, blush, looking_at_viewer, navel_cutout, solo, striped_clothes, striped_headwear, orange_headwear, buckle, puffy_short_sleeves, open_mouth, simple_background, sweatdrop, upper_body, white_background, zipper_pull_tab, flat_chest, fangs, small_breasts |
| 3 | 10 |  |  |  |  |  | 1girl, blue_skin, smile, solo, black_dress, looking_at_viewer, see-through_sleeves, bare_shoulders, puffy_sleeves, black_headwear, long_sleeves, gloves |
| 4 | 8 |  |  |  |  |  | 1girl, bare_shoulders, looking_at_viewer, small_breasts, solo, smile, highleg_swimsuit, blush, casual_one-piece_swimsuit, navel_cutout, open_mouth, wet, water |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blue_overalls | puffy_sleeves | solo | sunflower | blush | looking_at_viewer | smile | striped_headwear | upper_body | black_sleeves | open_mouth | holding_flower | striped_clothes | belt | buckle | navel | small_breasts | holding | yellow_headwear | straw_hat | navel_cutout | orange_headwear | puffy_short_sleeves | simple_background | sweatdrop | white_background | zipper_pull_tab | flat_chest | fangs | blue_skin | black_dress | see-through_sleeves | bare_shoulders | black_headwear | long_sleeves | gloves | highleg_swimsuit | casual_one-piece_swimsuit | wet | water |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------------|:----------------|:-------|:------------|:--------|:--------------------|:--------|:-------------------|:-------------|:----------------|:-------------|:-----------------|:------------------|:-------|:---------|:--------|:----------------|:----------|:------------------|:------------|:---------------|:------------------|:----------------------|:--------------------|:------------|:-------------------|:------------------|:-------------|:--------|:------------|:--------------|:----------------------|:-----------------|:-----------------|:---------------|:---------|:-------------------|:----------------------------|:------|:--------|
| 0 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 13 |  |  |  |  |  | X | X | X | X | X | X | X | X | | | | X | | | X | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 2 | 10 |  |  |  |  |  | X | X | | X | | X | X | | X | X | X | X | | X | X | X | | X | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | |
| 3 | 10 |  |  |  |  |  | X | | X | X | | | X | X | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | |
| 4 | 8 |  |  |  |  |  | X | | | X | | X | X | X | | | | X | | | | | | X | | | | X | | | | | | | | | | | | X | | | | X | X | X | X |
|
goodfellowliu/General100 | ---
license: openrail
---
|
AdapterOcean/med_alpaca_standardized_cluster_41_alpaca | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 12131085
num_examples: 7395
download_size: 6484183
dataset_size: 12131085
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "med_alpaca_standardized_cluster_41_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hippocrates/CochranePLS | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6321
num_examples: 1
- name: test
num_bytes: 1279406
num_examples: 200
download_size: 669610
dataset_size: 1285727
---
# Dataset Card for "CochranePLS"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
aixsatoshi/Longcontext-aozora-instruction | ---
license: cc
language:
- ja
---
This is an instruction dataset for long-context text.
The long texts are taken from the following Aozora Bunko dataset:
[globis-university/aozorabunko-clean](https://huggingface.co/datasets/globis-university/aozorabunko-clean)
# Limitation
The main purpose of this dataset is to present the **style** of long-context question answering.
We have **deliberately not filtered** the question-answer pairs for correctness.
Long contexts generally degrade model performance, which makes this a difficult task,
and filtering would **remove the instructions for precisely these difficult tasks**.
When used for fine-tuning, the effect varies greatly with the capability of the base model being tuned.
Whether a model can answer correctly likely depends on its parameter count and pretraining.
# License
CC BY 4.0
|
mask-distilled-one-sec-cv12/chunk_3 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 993082576
num_examples: 195028
download_size: 1005792153
dataset_size: 993082576
---
# Dataset Card for "chunk_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MarcoS11/Kaveh | ---
license: other
---
|
ThanhNX/Object_direction | ---
dataset_info:
features:
- name: file_name
dtype: string
- name: question
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 201
num_examples: 3
download_size: 1908
dataset_size: 201
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tessiw/german_OpenOrca7 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 420136780
num_examples: 250000
download_size: 241313289
dataset_size: 420136780
---
# Dataset Card for "german_OpenOrca7"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
emny/indo_squad_dev_v2 | ---
license: apache-2.0
---
|
CyberHarem/remilia_scarlet_touhou | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of remilia_scarlet/レミリア・スカーレット/레밀리아스칼렛 (Touhou)
This is the dataset of remilia_scarlet/レミリア・スカーレット/레밀리아스칼렛 (Touhou), containing 500 images and their tags.
The core tags of this character are `short_hair, red_eyes, bat_wings, wings, hat, ribbon, mob_cap, blue_hair, hat_ribbon, bangs, hair_between_eyes, red_ribbon, bow`, which are pruned in this dataset.
Images were crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 779.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/remilia_scarlet_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 443.14 MiB | [Download](https://huggingface.co/datasets/CyberHarem/remilia_scarlet_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1251 | 935.88 MiB | [Download](https://huggingface.co/datasets/CyberHarem/remilia_scarlet_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 692.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/remilia_scarlet_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1251 | 1.29 GiB | [Download](https://huggingface.co/datasets/CyberHarem/remilia_scarlet_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/remilia_scarlet_touhou',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
A list of tag clustering results; some outfits may be discoverable here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, closed_mouth, frilled_shirt_collar, looking_at_viewer, puffy_short_sleeves, red_ascot, solo, wrist_cuffs, brooch, smile, pink_dress, pink_headwear, pink_shirt, purple_hair, red_bow, skirt, blush, cowboy_shot, sash, simple_background, white_background |
| 1 | 10 |  |  |  |  |  | 1girl, looking_at_viewer, puffy_short_sleeves, solo, white_dress, red_ascot, simple_background, upper_body, white_background, white_headwear, wrist_cuffs, brooch, red_bow, blush, frilled_shirt_collar, nail_polish, smile |
| 2 | 8 |  |  |  |  |  | 1girl, brooch, looking_at_viewer, open_mouth, pink_dress, puffy_short_sleeves, solo, fang, pink_headwear, wrist_cuffs, blush, simple_background, smile, upper_body, white_background, frills, grey_background, hands_up, red_ascot, red_bow, v-shaped_eyebrows |
| 3 | 6 |  |  |  |  |  | 1girl, blush, frilled_shirt_collar, pink_dress, red_ascot, solo, brooch, looking_at_viewer, simple_background, white_background, no_headwear, artist_name, puffy_short_sleeves, upper_body |
| 4 | 8 |  |  |  |  |  | 1girl, full_moon, looking_at_viewer, solo, wrist_cuffs, red_moon, bat_(animal), brooch, puffy_short_sleeves, red_ascot, spear_the_gungnir, frilled_shirt_collar, red_bow, dress, skirt_set, frilled_sleeves, holding_weapon, night_sky, open_mouth, pointy_ears, polearm, purple_hair, red_nails, smile, star_(sky), white_headwear, white_skirt |
| 5 | 10 |  |  |  |  |  | 1girl, solo, looking_at_viewer, smile, wrist_cuffs, ascot, dress, puffy_short_sleeves, skirt_set, moon, sash, spear_the_gungnir |
| 6 | 6 |  |  |  |  |  | 1girl, dress, solo, wrist_cuffs, smile, ascot |
| 7 | 6 |  |  |  |  |  | 1girl, blush, looking_at_viewer, pointy_ears, simple_background, smile, solo, white_background, cowboy_shot, dress, standing, juliet_sleeves, neck_ribbon, shirt, skirt, black_thighhighs, center_frills, closed_mouth, hat_bow, medium_breasts, zettai_ryouiki |
| 8 | 5 |  |  |  |  |  | 1girl, looking_at_viewer, sitting, solo, bare_shoulders, small_breasts, pillow, white_panties, black_thighhighs, canopy_bed, corset, flower, lips, pantyshot, pointy_ears, underwear_only, white_gloves, white_thighhighs |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | closed_mouth | frilled_shirt_collar | looking_at_viewer | puffy_short_sleeves | red_ascot | solo | wrist_cuffs | brooch | smile | pink_dress | pink_headwear | pink_shirt | purple_hair | red_bow | skirt | blush | cowboy_shot | sash | simple_background | white_background | white_dress | upper_body | white_headwear | nail_polish | open_mouth | fang | frills | grey_background | hands_up | v-shaped_eyebrows | no_headwear | artist_name | full_moon | red_moon | bat_(animal) | spear_the_gungnir | dress | skirt_set | frilled_sleeves | holding_weapon | night_sky | pointy_ears | polearm | red_nails | star_(sky) | white_skirt | ascot | moon | standing | juliet_sleeves | neck_ribbon | shirt | black_thighhighs | center_frills | hat_bow | medium_breasts | zettai_ryouiki | sitting | bare_shoulders | small_breasts | pillow | white_panties | canopy_bed | corset | flower | lips | pantyshot | underwear_only | white_gloves | white_thighhighs |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:-----------------------|:--------------------|:----------------------|:------------|:-------|:--------------|:---------|:--------|:-------------|:----------------|:-------------|:--------------|:----------|:--------|:--------|:--------------|:-------|:--------------------|:-------------------|:--------------|:-------------|:-----------------|:--------------|:-------------|:-------|:---------|:------------------|:-----------|:--------------------|:--------------|:--------------|:------------|:-----------|:---------------|:--------------------|:--------|:------------|:------------------|:-----------------|:------------|:--------------|:----------|:------------|:-------------|:--------------|:--------|:-------|:-----------|:-----------------|:--------------|:--------|:-------------------|:----------------|:----------|:-----------------|:-----------------|:----------|:-----------------|:----------------|:---------|:----------------|:-------------|:---------|:---------|:-------|:------------|:-----------------|:---------------|:-------------------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 10 |  |  |  |  |  | X | | X | X | X | X | X | X | X | X | | | | | X | | X | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 8 |  |  |  |  |  | X | | | X | X | X | X | X | X | X | X | X | | | X | | X | | | X | X | | X | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | | X | X | X | X | X | | X | | X | | | | | | X | | | X | X | | X | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 8 |  |  |  |  |  | X | | X | X | X | X | X | X | X | X | | | | X | X | | | | | | | | | X | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 10 |  |  |  |  |  | X | | | X | X | | X | X | | X | | | | | | | | | X | | | | | | | | | | | | | | | | | | X | X | X | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 6 |  |  |  |  |  | X | | | | | | X | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 6 |  |  |  |  |  | X | X | | X | | | X | | | X | | | | | | X | X | X | | X | X | | | | | | | | | | | | | | | | | X | | | | | X | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | |
| 8 | 5 |  |  |  |  |  | X | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | X | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
rkreddyp/cve-all | ---
license: mit
---
|
witchling22/tokenized_dataset_bart_fblarge | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: source
dtype: string
- name: source_labels
dtype: string
- name: rouge_scores
dtype: string
- name: paper_id
dtype: string
- name: target
dtype: string
- name: full_source_text
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 17340567
num_examples: 1992
- name: test
num_bytes: 5620222
num_examples: 618
- name: validation
num_bytes: 5534448
num_examples: 619
download_size: 6328102
dataset_size: 28495237
---
# Dataset Card for "tokenized_dataset_bart_fblarge"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dhuynh95/Magicoder-Evol-Instruct-110K-Filtered_0.5-Special-Token | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 10500318
num_examples: 5000
download_size: 5538939
dataset_size: 10500318
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
umuth/keywording2 | ---
license: apache-2.0
task_categories:
- image-feature-extraction
language:
- en
tags:
- image captioning
size_categories:
- n<1K
--- |
anan-2024/twitter_dataset_1713109531 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 29639
num_examples: 74
download_size: 17207
dataset_size: 29639
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ar_sarcasm | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- ar
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-semeval_2017
- extended|other-astd
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: ArSarcasm
tags:
- sarcasm-detection
dataset_info:
features:
- name: dialect
dtype:
class_label:
names:
'0': egypt
'1': gulf
'2': levant
'3': magreb
'4': msa
- name: sarcasm
dtype:
class_label:
names:
'0': non-sarcastic
'1': sarcastic
- name: sentiment
dtype:
class_label:
names:
'0': negative
'1': neutral
'2': positive
- name: original_sentiment
dtype:
class_label:
names:
'0': negative
'1': neutral
'2': positive
- name: tweet
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 1829159
num_examples: 8437
- name: test
num_bytes: 458210
num_examples: 2110
download_size: 1180619
dataset_size: 2287369
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for ArSarcasm
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [GitHub](https://github.com/iabufarha/ArSarcasm)
- **Paper:** https://www.aclweb.org/anthology/2020.osact-1.5/
### Dataset Summary
ArSarcasm is a new Arabic sarcasm detection dataset.
The dataset was created using previously available Arabic sentiment analysis
datasets ([SemEval 2017](https://www.aclweb.org/anthology/S17-2088.pdf)
and [ASTD](https://www.aclweb.org/anthology/D15-1299.pdf)) and adds sarcasm and
dialect labels to them.
The dataset contains 10,547 tweets, 1,682 (16%) of which are sarcastic.
For more details, please check the paper
[From Arabic Sentiment Analysis to Sarcasm Detection: The ArSarcasm Dataset](https://www.aclweb.org/anthology/2020.osact-1.5/)
### Supported Tasks and Leaderboards
You can get more information about the Arabic sarcasm task and leaderboard
[here](https://sites.google.com/view/ar-sarcasm-sentiment-detection/).
### Languages
Arabic (multiple dialects)
## Dataset Structure
### Data Instances
```python
{'dialect': 1, 'original_sentiment': 0, 'sarcasm': 0, 'sentiment': 0, 'source': 'semeval', 'tweet': 'نصيحه ما عمرك اتنزل لعبة سوبر ماريو مش زي ما كنّا متوقعين الله يرحم ايامات السيقا والفاميلي #SuperMarioRun'}
```
### Data Fields
- tweet: the original tweet text
- sarcasm: 0 for non-sarcastic, 1 for sarcastic
- sentiment: 0 for negative, 1 for neutral, 2 for positive
- original_sentiment: 0 for negative, 1 for neutral, 2 for positive
- source: the original source of tweet: SemEval or ASTD
- dialect: 0 for Egypt, 1 for Gulf, 2 for Levant, 3 for Magreb, 4 for Modern Standard Arabic (MSA)
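The integer codes in these fields can be decoded locally using the `class_label` names declared in the YAML header; a minimal sketch (the sample `record` below is abbreviated):

```python
# Label mappings as declared in the dataset's class_label features.
DIALECT = ['egypt', 'gulf', 'levant', 'magreb', 'msa']
SARCASM = ['non-sarcastic', 'sarcastic']
SENTIMENT = ['negative', 'neutral', 'positive']

def decode(example):
    """Return a copy of an ArSarcasm record with integer labels decoded to names."""
    out = dict(example)
    out['dialect'] = DIALECT[example['dialect']]
    out['sarcasm'] = SARCASM[example['sarcasm']]
    out['sentiment'] = SENTIMENT[example['sentiment']]
    out['original_sentiment'] = SENTIMENT[example['original_sentiment']]
    return out

record = {'dialect': 1, 'original_sentiment': 0, 'sarcasm': 0, 'sentiment': 0,
          'source': 'semeval', 'tweet': '...'}
print(decode(record))  # dialect='gulf', sarcasm='non-sarcastic', sentiment='negative'
```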
### Data Splits
The training set contains 8,437 tweets, while the test set contains 2,110 tweets.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The dataset was created using previously available Arabic sentiment analysis datasets (SemEval 2017 and ASTD) and adds sarcasm and dialect labels to them.
#### Who are the source language producers?
SemEval 2017 and ASTD
### Annotations
#### Annotation process
For the annotation process, we used the Figure-Eight crowdsourcing platform. Our main objective was to annotate the data for sarcasm detection, but due to the challenges posed by dialectal variations, we decided to also annotate the dialects. We additionally collected new sentiment labels in order to get a glimpse of the variability and subjectivity between different annotators. Thus, the annotators were asked to provide three labels for each tweet, as follows:
- Sarcasm: sarcastic or non-sarcastic.
- Sentiment: positive, negative or neutral.
- Dialect: Egyptian, Gulf, Levantine, Maghrebi or Modern Standard Arabic (MSA).
#### Who are the annotators?
Figure-Eight crowdsourcing platform
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- Ibrahim Abu-Farha
- Walid Magdy
### Licensing Information
MIT
### Citation Information
```
@inproceedings{abu-farha-magdy-2020-arabic,
title = "From {A}rabic Sentiment Analysis to Sarcasm Detection: The {A}r{S}arcasm Dataset",
author = "Abu Farha, Ibrahim and Magdy, Walid",
booktitle = "Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resource Association",
url = "https://www.aclweb.org/anthology/2020.osact-1.5",
pages = "32--39",
language = "English",
ISBN = "979-10-95546-51-1",
}
```
### Contributions
Thanks to [@mapmeld](https://github.com/mapmeld) for adding this dataset. |
open-llm-leaderboard/details_openbmb__UltraRM-13b | ---
pretty_name: Evaluation run of openbmb/UltraRM-13b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [openbmb/UltraRM-13b](https://huggingface.co/openbmb/UltraRM-13b) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 3 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_openbmb__UltraRM-13b\"\
,\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese\
\ are the [latest results from run 2023-12-02T13:26:56.823138](https://huggingface.co/datasets/open-llm-leaderboard/details_openbmb__UltraRM-13b/blob/main/results_2023-12-02T13-26-56.823138.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.0,\n \"\
acc_stderr\": 0.0\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \
\ \"acc_stderr\": 0.0\n }\n}\n```"
repo_url: https://huggingface.co/openbmb/UltraRM-13b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|arc:challenge|25_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_24T08_13_56.124311
path:
- '**/details_harness|drop|3_2023-10-24T08-13-56.124311.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-24T08-13-56.124311.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_24T08_13_56.124311
path:
- '**/details_harness|gsm8k|5_2023-10-24T08-13-56.124311.parquet'
- split: 2023_12_02T13_26_56.823138
path:
- '**/details_harness|gsm8k|5_2023-12-02T13-26-56.823138.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-02T13-26-56.823138.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hellaswag|10_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-08T20-45-47.827028.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-08T20-45-47.827028.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-08T20-45-47.827028.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_24T08_13_56.124311
path:
- '**/details_harness|winogrande|5_2023-10-24T08-13-56.124311.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-24T08-13-56.124311.parquet'
- config_name: results
data_files:
- split: 2023_10_08T20_45_47.827028
path:
- results_2023-10-08T20-45-47.827028.parquet
- split: 2023_10_24T08_13_56.124311
path:
- results_2023-10-24T08-13-56.124311.parquet
- split: 2023_12_02T13_26_56.823138
path:
- results_2023-12-02T13-26-56.823138.parquet
- split: latest
path:
- results_2023-12-02T13-26-56.823138.parquet
---
# Dataset Card for Evaluation run of openbmb/UltraRM-13b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/openbmb/UltraRM-13b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [openbmb/UltraRM-13b](https://huggingface.co/openbmb/UltraRM-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 3 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_openbmb__UltraRM-13b",
"harness_gsm8k_5",
split="latest")
```
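As an illustration (not part of the official tooling), the configuration names listed above follow the pattern `harness_<task>_<num_fewshot>`, so the task name and few-shot count can be recovered by splitting on the last underscore:

```python
# Illustrative parser for configuration names such as "harness_gsm8k_5"
# or "harness_truthfulqa_mc_0" (assumes the naming scheme shown above).

def parse_config_name(name: str) -> tuple[str, int]:
    """Split e.g. 'harness_gsm8k_5' into ('gsm8k', 5)."""
    body = name.removeprefix("harness_")
    task, _, shots = body.rpartition("_")
    return task, int(shots)
```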
## Latest results
These are the [latest results from run 2023-12-02T13:26:56.823138](https://huggingface.co/datasets/open-llm-leaderboard/details_openbmb__UltraRM-13b/blob/main/results_2023-12-02T13-26-56.823138.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each result in the "results" configuration and in the "latest" split of each eval):
```json
{
"all": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
varun-v-rao/squad | ---
task_categories:
- question-answering
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: train
num_bytes: 79061690.62181075
num_examples: 87285
- name: validation
num_bytes: 10388764.166508988
num_examples: 10485
download_size: 16137496
dataset_size: 89450454.78831974
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
## Dataset Card for "squad"
This truncated dataset is derived from the Stanford Question Answering Dataset (SQuAD) for reading comprehension. It keeps only those instances of the original SQuAD dataset that fit within the context length of the BERT, RoBERTa, OPT, and T5 models.
### Preprocessing and Filtering
Preprocessing involves tokenization with the BertTokenizer (WordPiece), RobertaTokenizer (byte-level BPE), OPTTokenizer (byte-pair encoding), and T5Tokenizer (SentencePiece). Each sample is then kept only if the length of its tokenized input is within the `model_max_length` of every tokenizer.
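The keep-if-it-fits predicate described above can be sketched generically. The helper below is a hypothetical stand-in that uses a toy whitespace tokenizer rather than the actual BERT/RoBERTa/OPT/T5 tokenizers, but applies the same length check against each tokenizer's `model_max_length`:

```python
def make_tokenizer(max_length):
    """Toy tokenizer factory: splits on whitespace, carries a model_max_length."""
    def tokenize(text):
        return text.split()
    tokenize.model_max_length = max_length
    return tokenize

def fits_all(example, tokenizers):
    """Keep an example only if question + context fit every tokenizer's limit."""
    text = example["question"] + " " + example["context"]
    return all(len(tok(text)) <= tok.model_max_length for tok in tokenizers)

# In the real pipeline these would be the four HF tokenizers.
tokenizers = [make_tokenizer(512), make_tokenizer(384)]
short = {"question": "Who wrote it?", "context": "A short paragraph."}
long_ = {"question": "Who wrote it?", "context": "word " * 600}
print(fits_all(short, tokenizers))   # True
print(fits_all(long_, tokenizers))   # False
```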
|
autoevaluate/autoeval-staging-eval-project-d05a5ffd-7544861 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- trec
eval_info:
task: multi_class_classification
model: aychang/distilbert-base-cased-trec-coarse
metrics: []
dataset_name: trec
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: label-coarse
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: aychang/distilbert-base-cased-trec-coarse
* Dataset: trec
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
Digital-nimbus/llama-2-oai-function-calling | ---
license: mit
---
|
negengec/sentinel_trajectory_ist_mont | ---
license: cc-by-4.0
task_categories:
- image-segmentation
tags:
- remote-sensing
- satellite-imagery
- road-detection
- sensor-fusion
- montreal
- istanbul
- gps-trajectory
- sentinel
--- |
bigcode/guanaco-commits | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 17347601.0
num_examples: 12958
- name: test
num_bytes: 827046.0
num_examples: 629
download_size: 10948498
dataset_size: 18174647.0
---
# Dataset Card for "guanaco-commits"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
terrycm/alpaca_gpt4_data_es | ---
license: apache-2.0
---
|
sbmaruf/forai_ml-ted_talk_iwslt | ---
license: cc-by-nc-nd-4.0
---
Unofficial version of https://huggingface.co/datasets/ted_talks_iwslt
We created a different data loader for a `@forai_ml` project. |
irds/wikiclir_fi | ---
pretty_name: '`wikiclir/fi`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `wikiclir/fi`
The `wikiclir/fi` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/fi).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=418,677
- `queries` (i.e., topics); count=273,819
- `qrels`: (relevance assessments); count=939,613
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/wikiclir_fi', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'text': ...}
queries = load_dataset('irds/wikiclir_fi', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/wikiclir_fi', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{sasaki-etal-2018-cross,
title = "Cross-Lingual Learning-to-Rank with Shared Representations",
author = "Sasaki, Shota and
Sun, Shuo and
Schamoni, Shigehiko and
Duh, Kevin and
Inui, Kentaro",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-2073",
doi = "10.18653/v1/N18-2073",
pages = "458--463"
}
```
|
suolyer/pile_nih-exporter | ---
license: apache-2.0
---
|
kyujinpy/OpenOrca-KO | ---
language:
- ko
license: mit
size_categories:
- 10K<n<50K
task_categories:
- conversational
- text-classification
- token-classification
- table-question-answering
- question-answering
- zero-shot-classification
- summarization
- feature-extraction
- text-generation
- text2text-generation
pretty_name: OpenOrca
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: input
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 44220539
num_examples: 21632
download_size: 22811589
dataset_size: 44220539
---
# OpenOrca-KO
- A dataset of roughly 20k examples sampled from the OpenOrca dataset and translated into Korean
- If you build a model or dataset using this data, a brief attribution would be a great help to our research 😭😭
## Dataset info
1. **NIV** // 1,571 examples
2. **FLAN** // 9,434 examples
3. **T0** // 6,351 examples
4. **CoT** // 2,117 examples
5. **[KoCoT](https://huggingface.co/datasets/kyujinpy/KoCoT_2000)** // 2,159 examples
## Translation
Translated using the DeepL Pro API. Thanks.
---
>Below is original dataset card
## Table of Contents
- [Dataset Summary](#dataset-summary)
- [Dataset Attribution](#dataset-attribution)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Dataset Use](#dataset-use)
- [Use Cases](#use-cases)
- [Usage Caveats](#usage-caveats)
- [Getting Started](#getting-started)
<p><h1>🐋 The OpenOrca Dataset! 🐋</h1></p>

<a name="dataset-announcement"></a>
We are thrilled to announce the release of the OpenOrca dataset!
This rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the [Orca paper](https://arxiv.org/abs/2306.02707).
It has been instrumental in generating high-performing model checkpoints and serves as a valuable resource for all NLP researchers and developers!
# Official Models
## OpenOrca-Platypus2-13B
Our [latest release](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B), the first 13B model to score higher than LLaMA1-65B on the HuggingFace Leaderboard!
Released in partnership with Platypus.
## LlongOrca 7B & 13B
* Our [first 7B release](https://huggingface.co/Open-Orca/LlongOrca-7B-16k), trained on top of LLongMA2 to achieve 16,000 tokens context. #1 long context 7B model at release time, with >99% of the overall #1 model's performance.
* [LlongOrca-13B-16k](https://huggingface.co/Open-Orca/LlongOrca-13B-16k), trained on top of LLongMA2. #1 long context 13B model at release time, with >97% of the overall #1 model's performance.
## OpenOrcaxOpenChat-Preview2-13B
Our [second model](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B), highlighting that we've surpassed the performance reported in the Orca paper.
Was #1 at release time, now surpassed by our own OpenOrca-Platypus2-13B.
Released in partnership with OpenChat.
## OpenOrca-Preview1-13B
[OpenOrca-Preview1-13B](https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B)
This model was trained in less than a day, for <$200, with <10% of our data.
At release, it beat the then state-of-the-art models on BigBench-Hard and AGIEval, achieving ~60% of the improvements reported in the Orca paper.
<a name="dataset-summary"></a>
# Dataset Summary
The OpenOrca dataset is a collection of augmented [FLAN Collection data](https://arxiv.org/abs/2301.13688).
Currently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.
It is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.
The data is primarily used for training and evaluation in the field of natural language processing.
<a name="dataset-attribution"></a>
# Dataset Attribution
We would like to give special recognition to the following contributors for their significant efforts and dedication:
Teknium
WingLian/Caseus
Eric Hartford
NanoBit
Pankaj
Winddude
Rohan
http://AlignmentLab.ai:
Autometa
Entropi
AtlasUnified
NeverendingToast
NanoBit
WingLian/Caseus
Also of course, as always, TheBloke, for being the backbone of the whole community.
Many thanks to NanoBit and Caseus, makers of [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl), for lending us their expertise on the platform that developed and trained manticore, minotaur, and many others!
We are welcoming sponsors or collaborators to help us build these models to the scale they deserve. Please reach out via our socials:
http://Alignmentlab.ai https://discord.gg/n9hXaBPWxx
Want to visualize our full dataset? Check out our [Nomic Atlas Map](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2).
[<img src="https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B/resolve/main/OpenOrca%20Nomic%20Atlas.png" alt="Atlas Nomic Dataset Map" width="400" height="400" />](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2)
<a name="supported-tasks-and-leaderboards"></a>
# Supported Tasks and Leaderboards
This dataset supports a range of tasks including language modeling, text generation, and text augmentation.
It has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.
Further information on leaderboards will be updated as they become available.
<a name="languages"></a>
# Languages
The language of the data is primarily English.
<a name="dataset-structure"></a>
# Dataset Structure
<a name="data-instances"></a>
## Data Instances
A data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.
The response is then entered into the response field.
<a name="data-fields"></a>
## Data Fields
The fields are:
1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.
2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint
3) 'question', representing a question entry as provided by the FLAN Collection
4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.
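As a small illustration of the `id` convention described above, the source submix can be recovered from the id's prefix. The helper and sample ids below are invented for the example:

```python
def submix_of(record_id):
    """Map an id like 'niv.12345' to its FLAN Collection submix via the prefix."""
    prefix = record_id.split(".", 1)[0].lower()
    return prefix if prefix in {"niv", "t0", "cot", "flan"} else "unknown"

# Hypothetical ids, just to show the mapping.
sample_ids = ["niv.12345", "flan.2002", "t0.31", "cot.987"]
print([submix_of(i) for i in sample_ids])  # ['niv', 'flan', 't0', 'cot']
```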
<a name="data-splits"></a>
## Data Splits
The data is unsplit.
<a name="dataset-creation"></a>
# Dataset Creation
<a name="curation-rationale"></a>
## Curation Rationale
The dataset was created to provide a source of augmented text data for researchers and developers.
The datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4.
This "reasoning trace" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on.
<a name="source-data"></a>
## Source Data
The data is generated using techniques in alignment with the distributions outlined in the Orca paper, except as noted below:
1) There is not enough CoT data in the FLAN Collection to generate 150K zero-shot entries, as the paper purports to use.
We suspect this portion was either undocumented or misrepresented. We have used the ~75K points available.
2) We used the pre-generated FLAN Collection datasets hosted on HuggingFace under conceptofmind, e.g. [conceptofmind/flan2021](https://huggingface.co/datasets/conceptofmind/flan2021_submix_original).
These are referenced by the [official FLAN Collection repo](https://github.com/google-research/FLAN/tree/main/flan/v2) as the preferred data source.
However, these are a subset of the full FLAN Collection data, and have less than the required entries for the flan2021 and t0 submixes, by ~1.25M and 200k respectively.
Combined, this gave us ~1.5M fewer datapoints than in the original Orca paper. Completing the set is ongoing work.
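As a quick sanity check of the arithmetic above (the inputs are the text's approximations, not exact counts):

```python
# Shortfall figures quoted for the two submixes, as rough approximations.
flan2021_shortfall = 1_250_000  # "~1.25M" missing from the flan2021 submix
t0_shortfall = 200_000          # "~200k" missing from the t0 submix
total_shortfall = flan2021_shortfall + t0_shortfall
print(total_shortfall)  # 1450000, i.e. the "~1.5M fewer datapoints" quoted
```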
<a name="dataset-use"></a>
# Dataset Use
<a name="use-cases"></a>
## Use Cases
The dataset can be used for tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation.
<a name="usage-caveats"></a>
## Usage Caveats
Given that this is a work-in-progress dataset, it is recommended to regularly check for updates and improvements.
Further, the data should be used in accordance with the guidelines and recommendations outlined in the Orca paper.
<a name="getting-started"></a>
## Getting Started
This dataset is organized such that it can be naively loaded via Hugging Face datasets library.
We recommend using streaming due to the large size of the files.
Regular updates and data generation progress can be monitored through the OpenOrca repository on Hugging Face.
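The streaming recommendation amounts to iterating lazily and taking only what you need. A minimal sketch of that pattern, with a stand-in generator in place of the iterator that `load_dataset("Open-Orca/OpenOrca", split="train", streaming=True)` would return:

```python
from itertools import islice

# Stand-in generator simulating a streamed dataset; the record shape
# follows the fields described above (id / question / response).
def fake_stream():
    i = 0
    while True:
        yield {"id": f"flan.{i}", "question": "...", "response": "..."}
        i += 1

# Take only the first few records without materializing the whole dataset.
first_three = list(islice(fake_stream(), 3))
print([r["id"] for r in first_three])  # ['flan.0', 'flan.1', 'flan.2']
```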
# Citation
```bibtex
@misc{OpenOrca,
title = {OpenOrca: An Open Dataset of GPT Augmented FLAN Reasoning Traces},
author = {Wing Lian and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrca}},
}
```
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
year={2023},
eprint={2301.13688},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
```bibtex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
      eprint={2307.09288},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
@software{touvron2023llama1,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
``` |
systemk/popular-code | ---
dataset_info:
- config_name: assembly
features:
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_count
dtype: int64
- name: id
dtype: string
- name: content
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 185152071.87902868
num_examples: 22311
- name: test
num_bytes: 20580751.120971322
num_examples: 2480
download_size: 67003168
dataset_size: 205732823.0
- config_name: batchfile
features:
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_count
dtype: int64
- name: id
dtype: string
- name: content
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 20075860.935047586
num_examples: 21560
- name: test
num_bytes: 2231065.0649524126
num_examples: 2396
download_size: 9992348
dataset_size: 22306926.0
- config_name: c
features:
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_count
dtype: float64
- name: id
dtype: string
- name: content
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 5067481722.438739
num_examples: 768311
- name: test
num_bytes: 563054257.5612613
num_examples: 85368
download_size: 1959592942
dataset_size: 5630535980.0
- config_name: c-sharp
features:
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_count
dtype: float64
- name: id
dtype: string
- name: content
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 4592570709.738392
num_examples: 972115
- name: test
num_bytes: 510286684.2616079
num_examples: 108013
download_size: 1518897077
dataset_size: 5102857394.0
- config_name: cmake
features:
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_count
dtype: int64
- name: id
dtype: string
- name: content
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 32188311.28346837
num_examples: 16773
- name: test
num_bytes: 3577118.7165316306
num_examples: 1864
download_size: 14604243
dataset_size: 35765430.0
- config_name: cpp
features:
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_count
dtype: float64
- name: id
dtype: string
- name: content
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 4768231984.89816
num_examples: 571816
- name: test
num_bytes: 529810966.1018396
num_examples: 63536
download_size: 1728320661
dataset_size: 5298042951.0
- config_name: css
features:
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_count
dtype: int64
- name: id
dtype: string
- name: content
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 848462306.4871161
num_examples: 244944
- name: test
num_bytes: 94277053.51288392
num_examples: 27217
download_size: 307876548
dataset_size: 942739360.0
- config_name: default
features:
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_count
dtype: int64
- name: id
dtype: string
- name: content
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 60424176321.07302
num_examples: 13520621
- name: test
num_bytes: 6713800844.926977
num_examples: 1502292
download_size: 25214779682
dataset_size: 67137977166.0
- config_name: dockerfile
features:
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_count
dtype: int64
- name: id
dtype: string
- name: content
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 40861170.9
num_examples: 51435
- name: test
num_bytes: 4540130.1
num_examples: 5715
download_size: 22383514
dataset_size: 45401301.0
- config_name: fortran
features:
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_count
dtype: int64
- name: id
dtype: string
- name: content
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 116444383.79079287
num_examples: 14291
- name: test
num_bytes: 12939170.209207129
num_examples: 1588
download_size: 43286400
dataset_size: 129383554.0
- config_name: go
features:
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_count
dtype: int64
- name: id
dtype: string
- name: content
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 2207276664.435403
num_examples: 423046
- name: test
num_bytes: 245257600.5645971
num_examples: 47006
download_size: 825463406
dataset_size: 2452534265.0
- config_name: haskell
features:
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_count
dtype: int64
- name: id
dtype: string
- name: content
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 208607302.55979314
num_examples: 48730
- name: test
num_bytes: 23180967.44020685
num_examples: 5415
download_size: 86706642
dataset_size: 231788270.0
- config_name: html
features:
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_count
dtype: int64
- name: id
dtype: string
- name: content
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 2892607238.080207
num_examples: 296996
- name: test
num_bytes: 321405132.91979295
num_examples: 33000
download_size: 1019975731
dataset_size: 3214012371.0
- config_name: java
features:
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_count
dtype: int64
- name: id
dtype: string
- name: content
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 8559534877.711628
num_examples: 1806459
- name: test
num_bytes: 951061010.2883722
num_examples: 200718
download_size: 3173260121
dataset_size: 9510595888.0
- config_name: javascript
features:
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_count
dtype: int64
- name: id
dtype: string
- name: content
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 5891105157.969938
num_examples: 1758985
- name: test
num_bytes: 654567984.030062
num_examples: 195443
download_size: 2451299575
dataset_size: 6545673142.0
- config_name: julia
features:
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_count
dtype: int64
- name: id
dtype: string
- name: content
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 118357902.97670639
num_examples: 26582
- name: test
num_bytes: 13152857.023293607
num_examples: 2954
download_size: 50895242
dataset_size: 131510760.0
- config_name: lua
features:
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_count
dtype: int64
- name: id
dtype: string
- name: content
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 285155371.2303212
num_examples: 49450
- name: test
num_bytes: 31687133.76967877
num_examples: 5495
download_size: 109173752
dataset_size: 316842505.0
- config_name: makefile
features:
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_count
dtype: int64
- name: id
dtype: string
- name: content
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 95894814.03596312
num_examples: 59160
- name: test
num_bytes: 10656059.964036876
num_examples: 6574
download_size: 42748861
dataset_size: 106550874.0
- config_name: markdown
features:
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_count
dtype: int64
- name: id
dtype: string
- name: content
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 8089968231.110576
num_examples: 1892635
- name: test
num_bytes: 898886308.8894247
num_examples: 210293
download_size: 4412277013
dataset_size: 8988854540.0
- config_name: perl
features:
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_count
dtype: int64
- name: id
dtype: string
- name: content
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 165252209.2220307
num_examples: 32894
- name: test
num_bytes: 18361914.7779693
num_examples: 3655
download_size: 73056742
dataset_size: 183614124.0
- config_name: php
features:
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_count
dtype: int64
- name: id
dtype: string
- name: content
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 4670622386.956541
num_examples: 1411470
- name: test
num_bytes: 518961352.0434591
num_examples: 156831
download_size: 1831244889
dataset_size: 5189583739.0
- config_name: powershell
features:
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_count
dtype: int64
- name: id
dtype: string
- name: content
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 107302772.56389657
num_examples: 24085
- name: test
num_bytes: 11926490.43610343
num_examples: 2677
download_size: 40851866
dataset_size: 119229263.0
- config_name: python
features:
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_count
dtype: int64
- name: id
dtype: string
- name: content
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 5828118254.741786
num_examples: 1157997
- name: test
num_bytes: 647572050.2582142
num_examples: 128667
download_size: 2401359464
dataset_size: 6475690305.0
- config_name: ruby
features:
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_count
dtype: int64
- name: id
dtype: string
- name: content
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 683527631.192737
num_examples: 305128
- name: test
num_bytes: 75949505.80726303
num_examples: 33904
download_size: 294077915
dataset_size: 759477137.0
- config_name: rust
features:
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_count
dtype: int64
- name: id
dtype: string
- name: content
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 736780485.8990481
num_examples: 124241
- name: test
num_bytes: 81867134.10095185
num_examples: 13805
download_size: 255662703
dataset_size: 818647620.0
- config_name: scala
features:
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_count
dtype: int64
- name: id
dtype: string
- name: content
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 405211398.3278998
num_examples: 122020
- name: test
num_bytes: 45024226.672100194
num_examples: 13558
download_size: 170432747
dataset_size: 450235625.0
- config_name: shell
features:
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_count
dtype: int64
- name: id
dtype: string
- name: content
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 292522541.8715689
num_examples: 198568
- name: test
num_bytes: 32503814.128431052
num_examples: 22064
download_size: 151818390
dataset_size: 325026356.0
- config_name: sql
features:
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_count
dtype: int64
- name: id
dtype: string
- name: content
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 423121499.70790017
num_examples: 87787
- name: test
num_bytes: 47017784.29209981
num_examples: 9755
download_size: 135894444
dataset_size: 470139284.0
- config_name: tex
features:
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_count
dtype: int64
- name: id
dtype: string
- name: content
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 521708011.01700556
num_examples: 47049
- name: test
num_bytes: 57971252.98299444
num_examples: 5228
download_size: 244760184
dataset_size: 579679264.0
- config_name: typescript
features:
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_count
dtype: int64
- name: id
dtype: string
- name: content
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 2497822556.7624574
num_examples: 949259
- name: test
num_bytes: 277537886.23754257
num_examples: 105474
download_size: 1039368886
dataset_size: 2775360443.0
- config_name: visual-basic
features:
- name: max_stars_repo_path
dtype: string
- name: max_stars_repo_name
dtype: string
- name: max_stars_count
dtype: int64
- name: id
dtype: string
- name: content
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 72145624.30813125
num_examples: 14510
- name: test
num_bytes: 8020047.691868759
num_examples: 1613
download_size: 24980666
dataset_size: 80165672.0
configs:
- config_name: assembly
data_files:
- split: train
path: assembly/train-*
- split: test
path: assembly/test-*
- config_name: batchfile
data_files:
- split: train
path: batchfile/train-*
- split: test
path: batchfile/test-*
- config_name: c
data_files:
- split: train
path: c/train-*
- split: test
path: c/test-*
- config_name: c-sharp
data_files:
- split: train
path: c-sharp/train-*
- split: test
path: c-sharp/test-*
- config_name: cmake
data_files:
- split: train
path: cmake/train-*
- split: test
path: cmake/test-*
- config_name: cpp
data_files:
- split: train
path: cpp/train-*
- split: test
path: cpp/test-*
- config_name: css
data_files:
- split: train
path: css/train-*
- split: test
path: css/test-*
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- config_name: dockerfile
data_files:
- split: train
path: dockerfile/train-*
- split: test
path: dockerfile/test-*
- config_name: fortran
data_files:
- split: train
path: fortran/train-*
- split: test
path: fortran/test-*
- config_name: go
data_files:
- split: train
path: go/train-*
- split: test
path: go/test-*
- config_name: haskell
data_files:
- split: train
path: haskell/train-*
- split: test
path: haskell/test-*
- config_name: html
data_files:
- split: train
path: html/train-*
- split: test
path: html/test-*
- config_name: java
data_files:
- split: train
path: java/train-*
- split: test
path: java/test-*
- config_name: javascript
data_files:
- split: train
path: javascript/train-*
- split: test
path: javascript/test-*
- config_name: julia
data_files:
- split: train
path: julia/train-*
- split: test
path: julia/test-*
- config_name: lua
data_files:
- split: train
path: lua/train-*
- split: test
path: lua/test-*
- config_name: makefile
data_files:
- split: train
path: makefile/train-*
- split: test
path: makefile/test-*
- config_name: markdown
data_files:
- split: train
path: markdown/train-*
- split: test
path: markdown/test-*
- config_name: perl
data_files:
- split: train
path: perl/train-*
- split: test
path: perl/test-*
- config_name: php
data_files:
- split: train
path: php/train-*
- split: test
path: php/test-*
- config_name: powershell
data_files:
- split: train
path: powershell/train-*
- split: test
path: powershell/test-*
- config_name: python
data_files:
- split: train
path: python/train-*
- split: test
path: python/test-*
- config_name: ruby
data_files:
- split: train
path: ruby/train-*
- split: test
path: ruby/test-*
- config_name: rust
data_files:
- split: train
path: rust/train-*
- split: test
path: rust/test-*
- config_name: scala
data_files:
- split: train
path: scala/train-*
- split: test
path: scala/test-*
- config_name: shell
data_files:
- split: train
path: shell/train-*
- split: test
path: shell/test-*
- config_name: sql
data_files:
- split: train
path: sql/train-*
- split: test
path: sql/test-*
- config_name: tex
data_files:
- split: train
path: tex/train-*
- split: test
path: tex/test-*
- config_name: typescript
data_files:
- split: train
path: typescript/train-*
- split: test
path: typescript/test-*
- config_name: visual-basic
data_files:
- split: train
path: visual-basic/train-*
- split: test
path: visual-basic/test-*
---
|
ShenaoZ/0.0_ablation_dataset | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: score_chosen
dtype: float64
- name: score_rejected
dtype: float64
- name: reference_response
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: is_better
dtype: bool
splits:
- name: train_prefs_1
num_bytes: 168081294
num_examples: 20378
- name: test_prefs_1
num_bytes: 16410846
num_examples: 2000
- name: train_prefs_2
num_bytes: 176148314
num_examples: 20378
- name: test_prefs_2
num_bytes: 17111476
num_examples: 2000
download_size: 208416423
dataset_size: 377751930
configs:
- config_name: default
data_files:
- split: train_prefs_1
path: data/train_prefs_1-*
- split: test_prefs_1
path: data/test_prefs_1-*
- split: train_prefs_2
path: data/train_prefs_2-*
- split: test_prefs_2
path: data/test_prefs_2-*
---
# Dataset Card for "0.0_ablation_dataset"
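The `configs` block above registers four preference splits under the default config, each mapped to a parquet glob. A minimal sketch of that split-to-glob lookup (the `resolve_split` helper is illustrative only, not part of the `datasets` API; the mapping is copied from the YAML):

```python
# Split-name -> data-file glob mapping, copied from the `configs` block above.
SPLIT_FILES = {
    "train_prefs_1": "data/train_prefs_1-*",
    "test_prefs_1": "data/test_prefs_1-*",
    "train_prefs_2": "data/train_prefs_2-*",
    "test_prefs_2": "data/test_prefs_2-*",
}

def resolve_split(split: str) -> str:
    """Return the parquet glob registered for a split; raises KeyError if unknown."""
    return SPLIT_FILES[split]

print(resolve_split("train_prefs_1"))  # data/train_prefs_1-*
```

With the `datasets` library, the same splits load directly by name, e.g. `load_dataset("ShenaoZ/0.0_ablation_dataset", split="train_prefs_1")`.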
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_circulus__Llama-2-13b-orca-v1 | ---
pretty_name: Evaluation run of circulus/Llama-2-13b-orca-v1
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [circulus/Llama-2-13b-orca-v1](https://huggingface.co/circulus/Llama-2-13b-orca-v1)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
  \nThe dataset is composed of 64 configurations, each one corresponding to one of\
  \ the evaluated tasks.\n\nThe dataset has been created from 3 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
  \ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_circulus__Llama-2-13b-orca-v1\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-09-17T12:50:57.881579](https://huggingface.co/datasets/open-llm-leaderboard/details_circulus__Llama-2-13b-orca-v1/blob/main/results_2023-09-17T12-50-57.881579.json)(note\
  \ that there might be results for other tasks in the repos if successive evals didn't\
  \ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.1529991610738255,\n\
\ \"em_stderr\": 0.0036866006582882706,\n \"f1\": 0.2246581375838923,\n\
\ \"f1_stderr\": 0.003770616290655452,\n \"acc\": 0.44842066021890015,\n\
\ \"acc_stderr\": 0.010546865226614108\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.1529991610738255,\n \"em_stderr\": 0.0036866006582882706,\n\
\ \"f1\": 0.2246581375838923,\n \"f1_stderr\": 0.003770616290655452\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.1288855193328279,\n \
\ \"acc_stderr\": 0.009229580761400274\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7679558011049724,\n \"acc_stderr\": 0.011864149691827943\n\
\ }\n}\n```"
repo_url: https://huggingface.co/circulus/Llama-2-13b-orca-v1
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|arc:challenge|25_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_09_17T07_53_18.900339
path:
- '**/details_harness|drop|3_2023-09-17T07-53-18.900339.parquet'
- split: 2023_09_17T12_50_57.881579
path:
- '**/details_harness|drop|3_2023-09-17T12-50-57.881579.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-09-17T12-50-57.881579.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_09_17T07_53_18.900339
path:
- '**/details_harness|gsm8k|5_2023-09-17T07-53-18.900339.parquet'
- split: 2023_09_17T12_50_57.881579
path:
- '**/details_harness|gsm8k|5_2023-09-17T12-50-57.881579.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-09-17T12-50-57.881579.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hellaswag|10_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-18T08:46:04.009114.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-18T08:46:04.009114.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-18T08:46:04.009114.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_09_17T07_53_18.900339
path:
- '**/details_harness|winogrande|5_2023-09-17T07-53-18.900339.parquet'
- split: 2023_09_17T12_50_57.881579
path:
- '**/details_harness|winogrande|5_2023-09-17T12-50-57.881579.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-09-17T12-50-57.881579.parquet'
- config_name: results
data_files:
- split: 2023_08_18T08_46_04.009114
path:
- results_2023-08-18T08:46:04.009114.parquet
- split: 2023_09_17T07_53_18.900339
path:
- results_2023-09-17T07-53-18.900339.parquet
- split: 2023_09_17T12_50_57.881579
path:
- results_2023-09-17T12-50-57.881579.parquet
- split: latest
path:
- results_2023-09-17T12-50-57.881579.parquet
---
# Dataset Card for Evaluation run of circulus/Llama-2-13b-orca-v1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/circulus/Llama-2-13b-orca-v1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [circulus/Llama-2-13b-orca-v1](https://huggingface.co/circulus/Llama-2-13b-orca-v1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_circulus__Llama-2-13b-orca-v1",
"harness_winogrande_5",
        split="latest")
```
## Latest results
These are the [latest results from run 2023-09-17T12:50:57.881579](https://huggingface.co/datasets/open-llm-leaderboard/details_circulus__Llama-2-13b-orca-v1/blob/main/results_2023-09-17T12-50-57.881579.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.1529991610738255,
"em_stderr": 0.0036866006582882706,
"f1": 0.2246581375838923,
"f1_stderr": 0.003770616290655452,
"acc": 0.44842066021890015,
"acc_stderr": 0.010546865226614108
},
"harness|drop|3": {
"em": 0.1529991610738255,
"em_stderr": 0.0036866006582882706,
"f1": 0.2246581375838923,
"f1_stderr": 0.003770616290655452
},
"harness|gsm8k|5": {
"acc": 0.1288855193328279,
"acc_stderr": 0.009229580761400274
},
"harness|winogrande|5": {
"acc": 0.7679558011049724,
"acc_stderr": 0.011864149691827943
}
}
```
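The per-task entries in the aggregated results above follow a `suite|task|num_fewshot` key convention, so they can be traversed programmatically. A minimal sketch using the values shown above (the choice of `acc` or `f1` as the headline metric is an assumption for illustration):

```python
# Per-task aggregated results, copied from the JSON above (abridged).
results = {
    "harness|drop|3": {"em": 0.1529991610738255, "f1": 0.2246581375838923},
    "harness|gsm8k|5": {"acc": 0.1288855193328279},
    "harness|winogrande|5": {"acc": 0.7679558011049724},
}

# Each key encodes the harness suite, the task name, and the few-shot count.
for key, metrics in results.items():
    suite, task, shots = key.split("|")
    primary = metrics.get("acc", metrics.get("f1"))  # pick one headline metric
    print(f"{task} ({shots}-shot): {primary:.4f}")
```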
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
yhavinga/mc4_nl_cleaned | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- nl
- en
license:
- odc-by
multilinguality:
- monolingual
- en-nl
size_categories:
- n<1K
- 1K<n<10K
- 10K<n<100K
- 100K<n<1M
- 1M<n<10M
- 10M<n<100M
- 100M<n<1B
- 1B<n<10B
source_datasets:
- extended
task_categories:
- text-generation
task_ids:
- language-modeling
paperswithcode_id: mc4
pretty_name: mC4_nl_cleaned
---
# Dataset Card for Clean Dutch mC4
## Table of Contents
- [Dataset Card for Clean Dutch mC4](#dataset-card-for-clean-dutch-mc4)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Preprocessing](#preprocessing)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Original Homepage:** [HF Hub](https://huggingface.co/datasets/allenai/c4)
- **Paper:** [ArXiv](https://arxiv.org/abs/1910.10683)
### Dataset Summary
A cleaned version (151GB) of the Dutch part (277GB) of the C4 multilingual dataset (mC4).
While this dataset is monolingual, it is possible to download `en-nl` interleaved data, see the Dataset Config section below.
Based on the [Common Crawl dataset](https://commoncrawl.org).
The original version was prepared by [AllenAI](https://allenai.org/), hosted at the address [https://huggingface.co/datasets/allenai/c4](https://huggingface.co/datasets/allenai/c4).
### Preprocessing
The Dutch portion of mC4 was cleaned in a similar fashion to the cleaned English C4 version.
See [GitLab](https://gitlab.com/yhavinga/c4nlpreproc) for details.
In summary, the preprocessing procedure includes:
- Removing documents containing words from a selection of the [Dutch and English List of Dirty Naughty Obscene and Otherwise Bad Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words).
- Removing sentences containing:
  - Fewer than 3 words.
  - A word longer than 250 characters.
  - An end symbol not matching end-of-sentence punctuation.
  - Strings associated with JavaScript code (e.g. `{`), lorem ipsum, or policy information in Dutch or English.
- Removing documents (after sentence filtering):
  - Containing fewer than 5 sentences.
  - Containing fewer than 500 or more than 50,000 characters.
  - Not identified as predominantly Dutch by the `LangDetect` package.
Using parallel processing with 96 CPU cores on a TPUv3 via Google Cloud, the complete cleaning of all the original Dutch
shards of mC4 (1024 train shards of ~220 MB each, 4 validation shards of ~24 MB each) required roughly 10 hours due to the
demanding steps of sentence tokenization and language detection. The total size of the compressed `.json.gz` files is roughly halved by the procedure.
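The sentence- and document-level rules above can be sketched as plain Python predicates. This is a minimal illustration only, not the actual pipeline: the exact end-of-sentence symbols and code/boilerplate markers (here `EOS_CHARS` and the `{` / lorem ipsum checks) are assumptions.

```python
# End-of-sentence symbols accepted by the filter (an assumed set for illustration).
EOS_CHARS = (".", "!", "?", '"', "'")

def keep_sentence(sentence: str) -> bool:
    """Sentence-level rules: length, word length, final punctuation, boilerplate."""
    words = sentence.split()
    if len(words) < 3:                               # fewer than 3 words
        return False
    if any(len(w) > 250 for w in words):             # a word longer than 250 chars
        return False
    if not sentence.rstrip().endswith(EOS_CHARS):    # no end-of-sentence symbol
        return False
    if "{" in sentence or "lorem ipsum" in sentence.lower():  # code / filler text
        return False
    return True

def keep_document(sentences: list[str]) -> bool:
    """Document-level rules, applied after sentence filtering."""
    kept = [s for s in sentences if keep_sentence(s)]
    text = " ".join(kept)
    return len(kept) >= 5 and 500 <= len(text) <= 50_000
```

Language detection (the `LangDetect` step) is omitted from the sketch.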
## Dataset Structure
### Data Instances
An example from the dataset:
```
{
'timestamp': '2019-02-22T15:37:25Z',
'url': 'https://ondernemingen.bnpparibasfortis.be/nl/artikel?n=vijf-gouden-tips-voor-succesvol-zaken-doen-met-japan',
'text': 'Japanse bedrijven zijn niet alleen hondstrouw aan hun leveranciers , ze betalen ook nog eens erg stipt. Alleen is het niet zo makkelijk er een voet tussen de deur te krijgen. Met de volgende tips hebt u alvast een streepje voor.\nIn Japan draait alles om vertrouwen. Neem voldoende tijd om een relatie op te bouwen.Aarzel niet om tijdig een lokale vertrouwenspersoon in te schakelen.\nJapan is een erg competitieve markt.Kwaliteit en prijs zijn erg belangrijk, u zult dus het beste van uzelf moeten geven. Gelukkig is de beloning groot. Japanse zakenlui zijn loyaal en betalen stipt!\nJapanners houden er eigenzinnige eisen op na. Kom dus niet aanzetten met uw standaardproducten voor de Europese markt. Zo moet een producent van diepvriesfrieten bijvoorbeeld perfect identieke frietjes kunnen leveren in mini- verpakkingen. Het goede nieuws is dat Japanners voor kwaliteit graag diep in hun buidel tasten.\nEn u dacht dat Europa lijdt aan reglementitis? Japanners kennen er ook wat van. Tal van voorschriften zeggen wat je wel en niet mag doen. Gelukkig zijn de regels helder geformuleerd.\nHet gebruik van het Engels is niet echt ingeburgerd in Japan. Betrek een tolk bij uw onderhandelingen en zorg voor correcte vertalingen van handleidingen of softwareprogramma’s.'
}
```
### Data Fields
The data contains the following fields:
- `url`: url of the source as a string
- `text`: text content as a string
- `timestamp`: timestamp of extraction as a string
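The `timestamp` field uses the `YYYY-MM-DDTHH:MM:SSZ` form shown in the example above, so it can be parsed into an aware `datetime` like this (a small sketch; `parse_timestamp` is a hypothetical helper, not part of the dataset):

```python
from datetime import datetime, timezone

def parse_timestamp(ts: str) -> datetime:
    """Parse the extraction timestamp into a UTC-aware datetime."""
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)

dt = parse_timestamp("2019-02-22T15:37:25Z")  # timestamp from the example above
print(dt.isoformat())  # 2019-02-22T15:37:25+00:00
```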
### Data Configs
To build mC4, the original authors used [CLD3](https://github.com/google/cld3) to identify over 100 languages.
For Dutch, the whole corpus of scraped text was divided into `1032` jsonl files: `1024` for training, following
the naming style `c4-nl-cleaned.tfrecord-0XXXX-of-01024.json.gz`, and 4 for validation, following the
naming style `c4-nl-cleaned.tfrecord-0000X-of-00004.json.gz`. The full set of pre-processed files takes roughly 208GB of disk space to download with Git LFS.
For ease of use under different storage capacities, the following incremental configs are available (note: files on disk are compressed):
| config | train size (docs, words, download + preproc disk space) | validation size |
|:-------|--------------------------------------------------------:|----------------:|
| micro | 125k docs, 23M words (<1GB) | 16k docs |
| tiny | 6M docs, 2B words (6 GB + 15 GB) | 16k docs |
| small | 15M docs, 6B words (14 GB + 36 GB) | 16k docs |
| medium | 31M docs, 12B words (28 GB + 72 GB) | 32k docs |
| large | 47M docs, 19B words (42 GB + 108 GB) | 48k docs |
| full | 64M docs, 25B words (58 GB + 148 GB) | 64k docs |
For each config above there also exists a config `<name>_en_nl` that interleaves `nl` and `en` examples from the cleaned
`en` variant of C4.
You can load any config like this:
```python
from datasets import load_dataset
datasets = load_dataset('yhavinga/mc4_nl_cleaned', 'tiny', streaming=True)
print(datasets)
```
This will print
```
DatasetDict({
train: Dataset({
features: ['text', 'timestamp', 'url'],
num_rows: 6303893
})
validation: Dataset({
features: ['text', 'timestamp', 'url'],
num_rows: 16189
})
})
```
Since the configs are quite large, you may want to traverse them using the streaming mode available since 🤗 Datasets v1.9.0:
```python
from datasets import load_dataset
mc4_nl_full_stream = load_dataset('yhavinga/mc4_nl_cleaned', "full", split='train', streaming=True)
print(next(iter(mc4_nl_full_stream))) # Prints the example presented above
```
## Dataset Creation
Refer to the original paper for more considerations regarding the choice of sources and the scraping process for creating `mC4`.
## Considerations for Using the Data
### Social Impact of Dataset
With more than 151GB (58GB compressed) of cleaned Dutch text and more than 23B estimated words, this is by far the largest available cleaned corpus for the Dutch language.
The second largest dataset available is [OSCAR](https://oscar-corpus.com/), which is only 39GB in size for its deduplicated variant, and contains vulgarity.
Using this corpus for training language models with adequate computational resources will allow researchers to reach parity with the performance observed for the English language.
This can in turn have important repercussions for the development of commercial language technology applications for the Dutch language.
### Discussion of Biases
Despite the cleaning procedure aimed at removing vulgarity and profanity, it must be considered that models trained on this scraped corpus will
inevitably reflect biases present in blog articles and comments on the Internet.
This makes the corpus especially interesting in the context of studying data biases and how to limit their impacts.
## Additional Information
### Licensing Information
AllenAI are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.
### Citation Information
```
@article{2019t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {arXiv e-prints},
year = {2019},
archivePrefix = {arXiv},
eprint = {1910.10683},
}
```
### Contributions
Thanks to [gabriele.sarti996@gmail.com](mailto:gabriele.sarti996@gmail.com), [@dirkgr](https://github.com/dirkgr) and [@lhoestq](https://github.com/lhoestq) for
providing the `cleaned_it_mc4` example that shows how to upload a dataset to the Hugging Face Hub.
|
open-llm-leaderboard/details_nextai-team__Moe-3x7b-QA-Code-Inst | ---
pretty_name: Evaluation run of nextai-team/Moe-3x7b-QA-Code-Inst
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [nextai-team/Moe-3x7b-QA-Code-Inst](https://huggingface.co/nextai-team/Moe-3x7b-QA-Code-Inst)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_nextai-team__Moe-3x7b-QA-Code-Inst\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-09T21:56:11.146279](https://huggingface.co/datasets/open-llm-leaderboard/details_nextai-team__Moe-3x7b-QA-Code-Inst/blob/main/results_2024-02-09T21-56-11.146279.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.621923173638031,\n\
\ \"acc_stderr\": 0.03295469362414083,\n \"acc_norm\": 0.6256892076354362,\n\
\ \"acc_norm_stderr\": 0.033614114766574776,\n \"mc1\": 0.4675642594859241,\n\
\ \"mc1_stderr\": 0.01746663214957761,\n \"mc2\": 0.6314885689047778,\n\
\ \"mc2_stderr\": 0.015532965157473447\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5930034129692833,\n \"acc_stderr\": 0.014356399418009116,\n\
\ \"acc_norm\": 0.6424914675767918,\n \"acc_norm_stderr\": 0.014005494275916576\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6569408484365664,\n\
\ \"acc_stderr\": 0.0047376083401634034,\n \"acc_norm\": 0.8460466042620992,\n\
\ \"acc_norm_stderr\": 0.00360166483871893\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \
\ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.562962962962963,\n\
\ \"acc_stderr\": 0.04284958639753401,\n \"acc_norm\": 0.562962962962963,\n\
\ \"acc_norm_stderr\": 0.04284958639753401\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6578947368421053,\n \"acc_stderr\": 0.03860731599316092,\n\
\ \"acc_norm\": 0.6578947368421053,\n \"acc_norm_stderr\": 0.03860731599316092\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.58,\n\
\ \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \
\ \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6679245283018868,\n \"acc_stderr\": 0.02898545565233439,\n\
\ \"acc_norm\": 0.6679245283018868,\n \"acc_norm_stderr\": 0.02898545565233439\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7152777777777778,\n\
\ \"acc_stderr\": 0.037738099906869334,\n \"acc_norm\": 0.7152777777777778,\n\
\ \"acc_norm_stderr\": 0.037738099906869334\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \
\ \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \
\ \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6184971098265896,\n\
\ \"acc_stderr\": 0.03703851193099521,\n \"acc_norm\": 0.6184971098265896,\n\
\ \"acc_norm_stderr\": 0.03703851193099521\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.43137254901960786,\n \"acc_stderr\": 0.04928099597287534,\n\
\ \"acc_norm\": 0.43137254901960786,\n \"acc_norm_stderr\": 0.04928099597287534\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.73,\n \"acc_stderr\": 0.0446196043338474,\n \"acc_norm\": 0.73,\n\
\ \"acc_norm_stderr\": 0.0446196043338474\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5617021276595745,\n \"acc_stderr\": 0.03243618636108102,\n\
\ \"acc_norm\": 0.5617021276595745,\n \"acc_norm_stderr\": 0.03243618636108102\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.41228070175438597,\n\
\ \"acc_stderr\": 0.04630653203366596,\n \"acc_norm\": 0.41228070175438597,\n\
\ \"acc_norm_stderr\": 0.04630653203366596\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.6,\n \"acc_stderr\": 0.04082482904638628,\n \
\ \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.04082482904638628\n },\n\
\ \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.42592592592592593,\n\
\ \"acc_stderr\": 0.02546714904546955,\n \"acc_norm\": 0.42592592592592593,\n\
\ \"acc_norm_stderr\": 0.02546714904546955\n },\n \"harness|hendrycksTest-formal_logic|5\"\
: {\n \"acc\": 0.40476190476190477,\n \"acc_stderr\": 0.04390259265377563,\n\
\ \"acc_norm\": 0.40476190476190477,\n \"acc_norm_stderr\": 0.04390259265377563\n\
\ },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.38,\n\
\ \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.38,\n \
\ \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-high_school_biology|5\"\
: {\n \"acc\": 0.6870967741935484,\n \"acc_stderr\": 0.02637756702864586,\n\
\ \"acc_norm\": 0.6870967741935484,\n \"acc_norm_stderr\": 0.02637756702864586\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.5123152709359606,\n \"acc_stderr\": 0.035169204442208966,\n \"\
acc_norm\": 0.5123152709359606,\n \"acc_norm_stderr\": 0.035169204442208966\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\"\
: 0.66,\n \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7575757575757576,\n \"acc_stderr\": 0.03346409881055953,\n\
\ \"acc_norm\": 0.7575757575757576,\n \"acc_norm_stderr\": 0.03346409881055953\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7777777777777778,\n \"acc_stderr\": 0.029620227874790482,\n \"\
acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.029620227874790482\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8549222797927462,\n \"acc_stderr\": 0.025416343096306433,\n\
\ \"acc_norm\": 0.8549222797927462,\n \"acc_norm_stderr\": 0.025416343096306433\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6,\n \"acc_stderr\": 0.024838811988033165,\n \"acc_norm\"\
: 0.6,\n \"acc_norm_stderr\": 0.024838811988033165\n },\n \"harness|hendrycksTest-high_school_mathematics|5\"\
: {\n \"acc\": 0.337037037037037,\n \"acc_stderr\": 0.028820884666253255,\n\
\ \"acc_norm\": 0.337037037037037,\n \"acc_norm_stderr\": 0.028820884666253255\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6050420168067226,\n \"acc_stderr\": 0.03175367846096625,\n \
\ \"acc_norm\": 0.6050420168067226,\n \"acc_norm_stderr\": 0.03175367846096625\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.32450331125827814,\n \"acc_stderr\": 0.03822746937658751,\n \"\
acc_norm\": 0.32450331125827814,\n \"acc_norm_stderr\": 0.03822746937658751\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8165137614678899,\n \"acc_stderr\": 0.0165952597103993,\n \"acc_norm\"\
: 0.8165137614678899,\n \"acc_norm_stderr\": 0.0165952597103993\n },\n\
\ \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5,\n\
\ \"acc_stderr\": 0.034099716973523674,\n \"acc_norm\": 0.5,\n \
\ \"acc_norm_stderr\": 0.034099716973523674\n },\n \"harness|hendrycksTest-high_school_us_history|5\"\
: {\n \"acc\": 0.7745098039215687,\n \"acc_stderr\": 0.029331162294251735,\n\
\ \"acc_norm\": 0.7745098039215687,\n \"acc_norm_stderr\": 0.029331162294251735\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7805907172995781,\n \"acc_stderr\": 0.026939106581553945,\n \
\ \"acc_norm\": 0.7805907172995781,\n \"acc_norm_stderr\": 0.026939106581553945\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6322869955156951,\n\
\ \"acc_stderr\": 0.03236198350928275,\n \"acc_norm\": 0.6322869955156951,\n\
\ \"acc_norm_stderr\": 0.03236198350928275\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.732824427480916,\n \"acc_stderr\": 0.038808483010823944,\n\
\ \"acc_norm\": 0.732824427480916,\n \"acc_norm_stderr\": 0.038808483010823944\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8347107438016529,\n \"acc_stderr\": 0.03390780612972776,\n \"\
acc_norm\": 0.8347107438016529,\n \"acc_norm_stderr\": 0.03390780612972776\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.75,\n\
\ \"acc_stderr\": 0.04186091791394607,\n \"acc_norm\": 0.75,\n \
\ \"acc_norm_stderr\": 0.04186091791394607\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.6993865030674846,\n \"acc_stderr\": 0.03602511318806771,\n\
\ \"acc_norm\": 0.6993865030674846,\n \"acc_norm_stderr\": 0.03602511318806771\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4732142857142857,\n\
\ \"acc_stderr\": 0.047389751192741546,\n \"acc_norm\": 0.4732142857142857,\n\
\ \"acc_norm_stderr\": 0.047389751192741546\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7184466019417476,\n \"acc_stderr\": 0.04453254836326466,\n\
\ \"acc_norm\": 0.7184466019417476,\n \"acc_norm_stderr\": 0.04453254836326466\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8717948717948718,\n\
\ \"acc_stderr\": 0.02190190511507332,\n \"acc_norm\": 0.8717948717948718,\n\
\ \"acc_norm_stderr\": 0.02190190511507332\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \
\ \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.044619604333847394\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8045977011494253,\n\
\ \"acc_stderr\": 0.014179171373424383,\n \"acc_norm\": 0.8045977011494253,\n\
\ \"acc_norm_stderr\": 0.014179171373424383\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7023121387283237,\n \"acc_stderr\": 0.024617055388677003,\n\
\ \"acc_norm\": 0.7023121387283237,\n \"acc_norm_stderr\": 0.024617055388677003\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4,\n\
\ \"acc_stderr\": 0.01638463841038082,\n \"acc_norm\": 0.4,\n \
\ \"acc_norm_stderr\": 0.01638463841038082\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6993464052287581,\n \"acc_stderr\": 0.02625605383571896,\n\
\ \"acc_norm\": 0.6993464052287581,\n \"acc_norm_stderr\": 0.02625605383571896\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7138263665594855,\n\
\ \"acc_stderr\": 0.025670259242188936,\n \"acc_norm\": 0.7138263665594855,\n\
\ \"acc_norm_stderr\": 0.025670259242188936\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7098765432098766,\n \"acc_stderr\": 0.025251173936495036,\n\
\ \"acc_norm\": 0.7098765432098766,\n \"acc_norm_stderr\": 0.025251173936495036\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4929078014184397,\n \"acc_stderr\": 0.02982449855912901,\n \
\ \"acc_norm\": 0.4929078014184397,\n \"acc_norm_stderr\": 0.02982449855912901\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.44784876140808344,\n\
\ \"acc_stderr\": 0.012700582404768224,\n \"acc_norm\": 0.44784876140808344,\n\
\ \"acc_norm_stderr\": 0.012700582404768224\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6691176470588235,\n \"acc_stderr\": 0.02858270975389844,\n\
\ \"acc_norm\": 0.6691176470588235,\n \"acc_norm_stderr\": 0.02858270975389844\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6584967320261438,\n \"acc_stderr\": 0.019184639328092484,\n \
\ \"acc_norm\": 0.6584967320261438,\n \"acc_norm_stderr\": 0.019184639328092484\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n\
\ \"acc_stderr\": 0.04461272175910509,\n \"acc_norm\": 0.6818181818181818,\n\
\ \"acc_norm_stderr\": 0.04461272175910509\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.6938775510204082,\n \"acc_stderr\": 0.029504896454595964,\n\
\ \"acc_norm\": 0.6938775510204082,\n \"acc_norm_stderr\": 0.029504896454595964\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7014925373134329,\n\
\ \"acc_stderr\": 0.032357437893550424,\n \"acc_norm\": 0.7014925373134329,\n\
\ \"acc_norm_stderr\": 0.032357437893550424\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.83,\n \"acc_stderr\": 0.03775251680686371,\n \
\ \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.03775251680686371\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5120481927710844,\n\
\ \"acc_stderr\": 0.03891364495835817,\n \"acc_norm\": 0.5120481927710844,\n\
\ \"acc_norm_stderr\": 0.03891364495835817\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.02878210810540171,\n\
\ \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.02878210810540171\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4675642594859241,\n\
\ \"mc1_stderr\": 0.01746663214957761,\n \"mc2\": 0.6314885689047778,\n\
\ \"mc2_stderr\": 0.015532965157473447\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7742699289660616,\n \"acc_stderr\": 0.011749626260902549\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.48597422289613346,\n \
\ \"acc_stderr\": 0.01376706494023929\n }\n}\n```"
repo_url: https://huggingface.co/nextai-team/Moe-3x7b-QA-Code-Inst
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|arc:challenge|25_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|gsm8k|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hellaswag|10_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-09T21-56-11.146279.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-09T21-56-11.146279.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- '**/details_harness|winogrande|5_2024-02-09T21-56-11.146279.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-09T21-56-11.146279.parquet'
- config_name: results
data_files:
- split: 2024_02_09T21_56_11.146279
path:
- results_2024-02-09T21-56-11.146279.parquet
- split: latest
path:
- results_2024-02-09T21-56-11.146279.parquet
---
# Dataset Card for Evaluation run of nextai-team/Moe-3x7b-QA-Code-Inst
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [nextai-team/Moe-3x7b-QA-Code-Inst](https://huggingface.co/nextai-team/Moe-3x7b-QA-Code-Inst) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_nextai-team__Moe-3x7b-QA-Code-Inst",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-02-09T21:56:11.146279](https://huggingface.co/datasets/open-llm-leaderboard/details_nextai-team__Moe-3x7b-QA-Code-Inst/blob/main/results_2024-02-09T21-56-11.146279.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the "results" configuration and in the "latest" split of each eval):
```json
{
"all": {
"acc": 0.621923173638031,
"acc_stderr": 0.03295469362414083,
"acc_norm": 0.6256892076354362,
"acc_norm_stderr": 0.033614114766574776,
"mc1": 0.4675642594859241,
"mc1_stderr": 0.01746663214957761,
"mc2": 0.6314885689047778,
"mc2_stderr": 0.015532965157473447
},
"harness|arc:challenge|25": {
"acc": 0.5930034129692833,
"acc_stderr": 0.014356399418009116,
"acc_norm": 0.6424914675767918,
"acc_norm_stderr": 0.014005494275916576
},
"harness|hellaswag|10": {
"acc": 0.6569408484365664,
"acc_stderr": 0.0047376083401634034,
"acc_norm": 0.8460466042620992,
"acc_norm_stderr": 0.00360166483871893
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.562962962962963,
"acc_stderr": 0.04284958639753401,
"acc_norm": 0.562962962962963,
"acc_norm_stderr": 0.04284958639753401
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6578947368421053,
"acc_stderr": 0.03860731599316092,
"acc_norm": 0.6578947368421053,
"acc_norm_stderr": 0.03860731599316092
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6679245283018868,
"acc_stderr": 0.02898545565233439,
"acc_norm": 0.6679245283018868,
"acc_norm_stderr": 0.02898545565233439
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7152777777777778,
"acc_stderr": 0.037738099906869334,
"acc_norm": 0.7152777777777778,
"acc_norm_stderr": 0.037738099906869334
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6184971098265896,
"acc_stderr": 0.03703851193099521,
"acc_norm": 0.6184971098265896,
"acc_norm_stderr": 0.03703851193099521
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.43137254901960786,
"acc_stderr": 0.04928099597287534,
"acc_norm": 0.43137254901960786,
"acc_norm_stderr": 0.04928099597287534
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.73,
"acc_stderr": 0.0446196043338474,
"acc_norm": 0.73,
"acc_norm_stderr": 0.0446196043338474
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5617021276595745,
"acc_stderr": 0.03243618636108102,
"acc_norm": 0.5617021276595745,
"acc_norm_stderr": 0.03243618636108102
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.41228070175438597,
"acc_stderr": 0.04630653203366596,
"acc_norm": 0.41228070175438597,
"acc_norm_stderr": 0.04630653203366596
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6,
"acc_stderr": 0.04082482904638628,
"acc_norm": 0.6,
"acc_norm_stderr": 0.04082482904638628
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.42592592592592593,
"acc_stderr": 0.02546714904546955,
"acc_norm": 0.42592592592592593,
"acc_norm_stderr": 0.02546714904546955
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.40476190476190477,
"acc_stderr": 0.04390259265377563,
"acc_norm": 0.40476190476190477,
"acc_norm_stderr": 0.04390259265377563
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.38,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.38,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6870967741935484,
"acc_stderr": 0.02637756702864586,
"acc_norm": 0.6870967741935484,
"acc_norm_stderr": 0.02637756702864586
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5123152709359606,
"acc_stderr": 0.035169204442208966,
"acc_norm": 0.5123152709359606,
"acc_norm_stderr": 0.035169204442208966
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695237,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695237
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7575757575757576,
"acc_stderr": 0.03346409881055953,
"acc_norm": 0.7575757575757576,
"acc_norm_stderr": 0.03346409881055953
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.029620227874790482,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.029620227874790482
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8549222797927462,
"acc_stderr": 0.025416343096306433,
"acc_norm": 0.8549222797927462,
"acc_norm_stderr": 0.025416343096306433
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6,
"acc_stderr": 0.024838811988033165,
"acc_norm": 0.6,
"acc_norm_stderr": 0.024838811988033165
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.337037037037037,
"acc_stderr": 0.028820884666253255,
"acc_norm": 0.337037037037037,
"acc_norm_stderr": 0.028820884666253255
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6050420168067226,
"acc_stderr": 0.03175367846096625,
"acc_norm": 0.6050420168067226,
"acc_norm_stderr": 0.03175367846096625
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.32450331125827814,
"acc_stderr": 0.03822746937658751,
"acc_norm": 0.32450331125827814,
"acc_norm_stderr": 0.03822746937658751
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8165137614678899,
"acc_stderr": 0.0165952597103993,
"acc_norm": 0.8165137614678899,
"acc_norm_stderr": 0.0165952597103993
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5,
"acc_stderr": 0.034099716973523674,
"acc_norm": 0.5,
"acc_norm_stderr": 0.034099716973523674
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7745098039215687,
"acc_stderr": 0.029331162294251735,
"acc_norm": 0.7745098039215687,
"acc_norm_stderr": 0.029331162294251735
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7805907172995781,
"acc_stderr": 0.026939106581553945,
"acc_norm": 0.7805907172995781,
"acc_norm_stderr": 0.026939106581553945
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6322869955156951,
"acc_stderr": 0.03236198350928275,
"acc_norm": 0.6322869955156951,
"acc_norm_stderr": 0.03236198350928275
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.732824427480916,
"acc_stderr": 0.038808483010823944,
"acc_norm": 0.732824427480916,
"acc_norm_stderr": 0.038808483010823944
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8347107438016529,
"acc_stderr": 0.03390780612972776,
"acc_norm": 0.8347107438016529,
"acc_norm_stderr": 0.03390780612972776
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.75,
"acc_stderr": 0.04186091791394607,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04186091791394607
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6993865030674846,
"acc_stderr": 0.03602511318806771,
"acc_norm": 0.6993865030674846,
"acc_norm_stderr": 0.03602511318806771
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4732142857142857,
"acc_stderr": 0.047389751192741546,
"acc_norm": 0.4732142857142857,
"acc_norm_stderr": 0.047389751192741546
},
"harness|hendrycksTest-management|5": {
"acc": 0.7184466019417476,
"acc_stderr": 0.04453254836326466,
"acc_norm": 0.7184466019417476,
"acc_norm_stderr": 0.04453254836326466
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8717948717948718,
"acc_stderr": 0.02190190511507332,
"acc_norm": 0.8717948717948718,
"acc_norm_stderr": 0.02190190511507332
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.73,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.73,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8045977011494253,
"acc_stderr": 0.014179171373424383,
"acc_norm": 0.8045977011494253,
"acc_norm_stderr": 0.014179171373424383
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7023121387283237,
"acc_stderr": 0.024617055388677003,
"acc_norm": 0.7023121387283237,
"acc_norm_stderr": 0.024617055388677003
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4,
"acc_stderr": 0.01638463841038082,
"acc_norm": 0.4,
"acc_norm_stderr": 0.01638463841038082
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6993464052287581,
"acc_stderr": 0.02625605383571896,
"acc_norm": 0.6993464052287581,
"acc_norm_stderr": 0.02625605383571896
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7138263665594855,
"acc_stderr": 0.025670259242188936,
"acc_norm": 0.7138263665594855,
"acc_norm_stderr": 0.025670259242188936
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7098765432098766,
"acc_stderr": 0.025251173936495036,
"acc_norm": 0.7098765432098766,
"acc_norm_stderr": 0.025251173936495036
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4929078014184397,
"acc_stderr": 0.02982449855912901,
"acc_norm": 0.4929078014184397,
"acc_norm_stderr": 0.02982449855912901
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.44784876140808344,
"acc_stderr": 0.012700582404768224,
"acc_norm": 0.44784876140808344,
"acc_norm_stderr": 0.012700582404768224
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6691176470588235,
"acc_stderr": 0.02858270975389844,
"acc_norm": 0.6691176470588235,
"acc_norm_stderr": 0.02858270975389844
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6584967320261438,
"acc_stderr": 0.019184639328092484,
"acc_norm": 0.6584967320261438,
"acc_norm_stderr": 0.019184639328092484
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6818181818181818,
"acc_stderr": 0.04461272175910509,
"acc_norm": 0.6818181818181818,
"acc_norm_stderr": 0.04461272175910509
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6938775510204082,
"acc_stderr": 0.029504896454595964,
"acc_norm": 0.6938775510204082,
"acc_norm_stderr": 0.029504896454595964
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7014925373134329,
"acc_stderr": 0.032357437893550424,
"acc_norm": 0.7014925373134329,
"acc_norm_stderr": 0.032357437893550424
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.83,
"acc_stderr": 0.03775251680686371,
"acc_norm": 0.83,
"acc_norm_stderr": 0.03775251680686371
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5120481927710844,
"acc_stderr": 0.03891364495835817,
"acc_norm": 0.5120481927710844,
"acc_norm_stderr": 0.03891364495835817
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8304093567251462,
"acc_stderr": 0.02878210810540171,
"acc_norm": 0.8304093567251462,
"acc_norm_stderr": 0.02878210810540171
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4675642594859241,
"mc1_stderr": 0.01746663214957761,
"mc2": 0.6314885689047778,
"mc2_stderr": 0.015532965157473447
},
"harness|winogrande|5": {
"acc": 0.7742699289660616,
"acc_stderr": 0.011749626260902549
},
"harness|gsm8k|5": {
"acc": 0.48597422289613346,
"acc_stderr": 0.01376706494023929
}
}
```
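For quick inspection, the per-task entries in this JSON can be aggregated directly once loaded. A minimal sketch, using a truncated stand-in dict (the real file holds every harness task listed above):

```python
# Sketch: average per-task accuracies from a results dict shaped like the
# JSON above. The dict here is a truncated stand-in; the real file holds
# every harness task.
results = {
    "harness|arc:challenge|25": {"acc": 0.5930034129692833, "acc_norm": 0.6424914675767918},
    "harness|hellaswag|10": {"acc": 0.6569408484365664, "acc_norm": 0.8460466042620992},
    "harness|winogrande|5": {"acc": 0.7742699289660616},
}

accs = [scores["acc"] for scores in results.values() if "acc" in scores]
mean_acc = sum(accs) / len(accs)
print(f"mean acc over {len(accs)} tasks: {mean_acc:.4f}")
```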
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
ihaflix1/pauloflores | ---
license: openrail
---
|
vwxyzjn/openhermes-dev__mistralai_Mistral-7B-Instruct-v0.1__1707330973 | ---
dataset_info:
features:
- name: model
dtype: 'null'
- name: category
dtype: string
- name: language
dtype: string
- name: custom_instruction
dtype: bool
- name: id
dtype: string
- name: topic
dtype: string
- name: avatarUrl
dtype: 'null'
- name: idx
dtype: 'null'
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: weight
dtype: 'null'
- name: system_prompt
dtype: string
- name: source
dtype: string
- name: model_name
dtype: string
- name: skip_prompt_formatting
dtype: bool
- name: title
dtype: string
- name: hash
dtype: 'null'
- name: views
dtype: 'null'
- name: prompt
dtype: string
- name: token_length
dtype: int64
- name: candidate0
list:
- name: content
dtype: string
- name: role
dtype: string
- name: candidate1
list:
- name: content
dtype: string
- name: role
dtype: string
- name: candidate0_policy
dtype: string
- name: candidate1_policy
dtype: string
- name: candidate0_score
dtype: float64
- name: candidate1_score
dtype: float64
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: chosen_policy
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected_policy
dtype: string
splits:
- name: train_prefs
num_bytes: 2293819
num_examples: 167
download_size: 1357543
dataset_size: 2293819
configs:
- config_name: default
data_files:
- split: train_prefs
path: data/train_prefs-*
---
|
polinaeterna/doc-image-7 | ---
configs:
- config_name: default
drop_labels: false
drop_metadata: false
size_categories:
- n<1K
---
# [doc] image dataset 7
This dataset contains 2 jpeg files in the `red` directory and 2 jpeg files in the `green` directory. |
allenai/scitldr | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- summarization
task_ids: []
paperswithcode_id: scitldr
pretty_name: SciTLDR
tags:
- scientific-documents-summarization
dataset_info:
- config_name: Abstract
features:
- name: source
sequence: string
- name: source_labels
sequence:
class_label:
names:
'0': non-oracle
'1': oracle
- name: rouge_scores
sequence: float32
- name: paper_id
dtype: string
- name: target
sequence: string
splits:
- name: train
num_bytes: 2738065
num_examples: 1992
- name: test
num_bytes: 1073656
num_examples: 618
- name: validation
num_bytes: 994876
num_examples: 619
download_size: 5483987
dataset_size: 4806597
- config_name: AIC
features:
- name: source
sequence: string
- name: source_labels
sequence:
class_label:
names:
'0': 0
'1': 1
- name: rouge_scores
sequence: float32
- name: paper_id
dtype: string
- name: ic
dtype: bool_
- name: target
sequence: string
splits:
- name: train
num_bytes: 14473822
num_examples: 1992
- name: test
num_bytes: 4822026
num_examples: 618
- name: validation
num_bytes: 4476237
num_examples: 619
download_size: 25545108
dataset_size: 23772085
- config_name: FullText
features:
- name: source
sequence: string
- name: source_labels
sequence:
class_label:
names:
'0': non-oracle
'1': oracle
- name: rouge_scores
sequence: float32
- name: paper_id
dtype: string
- name: target
sequence: string
splits:
- name: train
num_bytes: 66917363
num_examples: 1992
- name: test
num_bytes: 20182554
num_examples: 618
- name: validation
num_bytes: 18790651
num_examples: 619
download_size: 110904552
dataset_size: 105890568
---
# Dataset Card for SciTLDR
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/allenai/scitldr
- **Repository:** https://github.com/allenai/scitldr
- **Paper:** https://arxiv.org/abs/2004.15011
- **Leaderboard:**
- **Point of Contact:** {isabelc,kylel,armanc,danw}@allenai.org
### Dataset Summary
`SciTLDR`: Extreme Summarization of Scientific Documents
SciTLDR is a new multi-target dataset of 5.4K TLDRs over 3.2K papers. SciTLDR contains both author-written and expert-derived TLDRs, where the latter are collected using a novel annotation protocol that produces high-quality summaries while minimizing annotation burden.
### Supported Tasks and Leaderboards
summarization
### Languages
English
## Dataset Structure
SciTLDR is split into a 60/20/20 train/dev/test split. In each file, each line is a JSON object, formatted as follows:
```
{
"source":[
"sent0",
"sent1",
"sent2",
...
],
"source_labels":[binary list in which 1 is the oracle sentence],
"rouge_scores":[precomputed rouge-1 scores],
"paper_id":"PAPER-ID",
"target":[
"author-tldr",
"pr-tldr0",
"pr-tldr1",
...
],
"title":"TITLE"
}
```
The keys `rouge_scores` and `source_labels` are not necessary for any code to run; the precomputed ROUGE scores are provided for future research.
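Since each line of a release file is a standalone JSON object, the files can also be read without any special tooling. A minimal sketch over an in-memory stand-in line (the real files live in the GitHub repository):

```python
import io
import json

# Sketch: read a SciTLDR-style .jsonl stream line by line. The single line
# here is a stand-in shaped like the schema above, not a real record.
stream = io.StringIO(
    '{"source": ["sent0", "sent1"], "source_labels": [0, 1], '
    '"rouge_scores": [0.1, 0.3], "paper_id": "PAPER-ID", '
    '"target": ["author-tldr", "pr-tldr0"], "title": "TITLE"}\n'
)
examples = [json.loads(line) for line in stream]
print(len(examples), examples[0]["paper_id"])
```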
### Data Instances
```
{
  "source": [
    "Mixed precision training (MPT) is becoming a practical technique to improve the speed and energy efficiency of training deep neural networks by leveraging the fast hardware support for IEEE half-precision floating point that is available in existing GPUs.",
    "MPT is typically used in combination with a technique called loss scaling, that works by scaling up the loss value up before the start of backpropagation in order to minimize the impact of numerical underflow on training.",
    "Unfortunately, existing methods make this loss scale value a hyperparameter that needs to be tuned per-model, and a single scale cannot be adapted to different layers at different training stages.",
    "We introduce a loss scaling-based training method called adaptive loss scaling that makes MPT easier and more practical to use, by removing the need to tune a model-specific loss scale hyperparameter.",
    "We achieve this by introducing layer-wise loss scale values which are automatically computed during training to deal with underflow more effectively than existing methods.",
    "We present experimental results on a variety of networks and tasks that show our approach can shorten the time to convergence and improve accuracy, compared with using the existing state-of-the-art MPT and single-precision floating point."
  ],
  "source_labels": [0, 0, 0, 1, 0, 0],
  "rouge_scores": [0.2399999958000001, 0.26086956082230633, 0.19999999531250012, 0.38095237636054424, 0.2051282003944774, 0.2978723360796741],
  "paper_id": "rJlnfaNYvB",
  "target": [
    "We devise adaptive loss scaling to improve mixed precision training that surpass the state-of-the-art results.",
    "Proposal for an adaptive loss scaling method during backpropagation for mix precision training where scale rate is decided automatically to reduce the underflow.",
    "The authors propose a method to train models in FP16 precision that adopts a more elaborate way to minimize underflow in every layer simultaneously and automatically."
  ],
  "title": "Adaptive Loss Scaling for Mixed Precision Training"
}
```
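In the instance above, `source_labels` marks sentence 3 (0-indexed) as the oracle, so recovering it is a one-liner. A minimal sketch, with the sentences truncated for brevity:

```python
# Sketch: recover the oracle sentence of an instance from source_labels
# (1 marks the oracle). Sentences are truncated copies of the example above.
instance = {
    "source": [
        "Mixed precision training (MPT) is becoming a practical technique ...",
        "MPT is typically used in combination with a technique called loss scaling ...",
        "Unfortunately, existing methods make this loss scale value a hyperparameter ...",
        "We introduce a loss scaling-based training method called adaptive loss scaling ...",
        "We achieve this by introducing layer-wise loss scale values ...",
        "We present experimental results on a variety of networks and tasks ...",
    ],
    "source_labels": [0, 0, 0, 1, 0, 0],
}

oracle_idx = instance["source_labels"].index(1)
oracle_sentence = instance["source"][oracle_idx]
print(oracle_idx, oracle_sentence)
```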
### Data Fields
- `source`: The Abstract, Introduction and Conclusion (AIC) or Full text of the paper, with one sentence per line.
- `source_labels`: Binary 0 or 1, 1 denotes the oracle sentence.
- `rouge_scores`: Precomputed ROUGE baseline scores for each sentence.
- `paper_id`: Arxiv Paper ID.
- `target`: The multiple target summaries (TLDRs) for each paper, one summary per line.
- `title`: Title of the paper.
### Data Splits
| | train | valid | test |
|-------------------|-------|--------|------|
| SciTLDR-A         | 1992  | 619    | 618  |
| SciTLDR-AIC       | 1992  | 619    | 618  |
| SciTLDR-FullText  | 1992  | 619    | 618  |
## Dataset Creation
[More Information Needed]
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
https://allenai.org/
### Annotations
#### Annotation process
Given the title and first 128 words of a reviewer comment about a paper,
re-write the summary (if it exists) into a single sentence or an incomplete
phrase. Summaries must be no more than one sentence.
Most summaries are between 15 and 25 words. The average rewritten summary is
20 words long.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
To encourage further research in the area of extreme summarization of scientific documents.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Apache License 2.0
### Citation Information
```
@article{cachola2020tldr,
  title={{TLDR}: Extreme Summarization of Scientific Documents},
  author={Isabel Cachola and Kyle Lo and Arman Cohan and Daniel S. Weld},
  journal={arXiv:2004.15011},
  year={2020},
}
```
### Contributions
Thanks to [@Bharat123rox](https://github.com/Bharat123rox) for adding this dataset. |
CyberHarem/crescent_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of crescent/クレセント/新月 (Azur Lane)
This is the dataset of crescent/クレセント/新月 (Azur Lane), containing 60 images and their tags.
The core tags of this character are `blonde_hair, long_hair, red_eyes, braid, hat, ahoge, breasts, ribbon, hair_ornament, beret, two_side_up, bangs, hair_ribbon, hair_between_eyes, very_long_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 60 | 73.55 MiB | [Download](https://huggingface.co/datasets/CyberHarem/crescent_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 60 | 45.85 MiB | [Download](https://huggingface.co/datasets/CyberHarem/crescent_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 153 | 101.93 MiB | [Download](https://huggingface.co/datasets/CyberHarem/crescent_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 60 | 66.46 MiB | [Download](https://huggingface.co/datasets/CyberHarem/crescent_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 153 | 137.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/crescent_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/crescent_azurlane',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 |  |  |  |  |  | 1girl, blush, looking_at_viewer, solo, open_mouth, apron, white_panties, ass, bow, pillow, small_breasts |
| 1 | 17 |  |  |  |  |  | blush, 1girl, solo, white_shirt, looking_at_viewer, juliet_sleeves, plaid_skirt, retrofit_(azur_lane), white_headwear, choker, plaid_ribbon, rabbit_hair_ornament, white_background, ascot, collared_shirt, crescent, jewelry, simple_background, closed_mouth, open_mouth, v-shaped_eyebrows |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | looking_at_viewer | solo | open_mouth | apron | white_panties | ass | bow | pillow | small_breasts | white_shirt | juliet_sleeves | plaid_skirt | retrofit_(azur_lane) | white_headwear | choker | plaid_ribbon | rabbit_hair_ornament | white_background | ascot | collared_shirt | crescent | jewelry | simple_background | closed_mouth | v-shaped_eyebrows |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:--------------------|:-------|:-------------|:--------|:----------------|:------|:------|:---------|:----------------|:--------------|:-----------------|:--------------|:-----------------------|:-----------------|:---------|:---------------|:-----------------------|:-------------------|:--------|:-----------------|:-----------|:----------|:--------------------|:---------------|:--------------------|
| 0 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | |
| 1 | 17 |  |  |  |  |  | X | X | X | X | X | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
iamnguyen/edu_parent_01 | ---
dataset_info:
features:
- name: content
dtype: string
- name: metadata
struct:
- name: answer
dtype: string
- name: id
dtype: string
- name: prefix
dtype: string
- name: question
dtype: string
- name: school_id
dtype: string
- name: seq_num
dtype: int64
- name: source
dtype: string
- name: tokenized_question
dtype: string
- name: url
dtype: string
- name: vector
sequence: float64
splits:
- name: train
num_bytes: 2931135
num_examples: 341
download_size: 2188765
dataset_size: 2931135
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AdapterOcean/code_instructions_standardized_cluster_0_std | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: cluster
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 19019275
num_examples: 14128
download_size: 9611146
dataset_size: 19019275
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "code_instructions_standardized_cluster_0_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
leeywin/mini-platypus-two | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 4186564
num_examples: 1000
download_size: 2245925
dataset_size: 4186564
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ahishamm/isic_masks | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 27417118.25
num_examples: 2750
download_size: 27376536
dataset_size: 27417118.25
---
# Dataset Card for "isic_masks"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
conghao/gpt3.5-jinyong-style | ---
task_categories:
- question-answering
language:
- zh
tags:
- medical
size_categories:
- 1K<n<10K
---
### OpenAI Fine-Tuning Medical Dataset
Open-source datasets tend to overlap heavily with the data a pretrained model has already seen, so fine-tuning on them yields little visible change in behavior. A dedicated dataset with a distinctive style is therefore needed.
To that end, this medical dataset was written in the **Jin Yong (wuxia novelist) writing style** and can be used directly for OpenAI fine-tuning.
Fine-tuning experiments show that, when given a medical question, the fine-tuned model answers in the **Jin Yong writing style**.
### Question
```
[{'role': 'system', 'content': '你是我的私人医生助手,你要回答我的健康问题。'}, {'role': 'user', 'content': '我睡眠不足,很头痛'}]
```
### Answer
```
少侠保重身体。江湖人称“睡眠不足,头痛难受”,此乃常见之症。吾乃医术高强之侠士,愿为你指点迷津。睡眠不足,多因忧虑纷扰、劳累过度所致。当以调整生活作息,保持规律之睡眠时间为要。且可饮用一些花草茶,如菊花、薰衣草,以宁心安神。如症状持续,还需请教名医,以寻得更为妥善之疗法。
```
As the sample shows, the model's answers have indeed picked up the **Jin Yong writing style**. |
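The samples above follow the chat `messages` format that OpenAI fine-tuning expects. A minimal sketch of serializing one such example as a JSONL line (the English placeholder texts are illustrative, not real dataset rows):

```python
import json

# Sketch: one chat-format training example (system + user + target assistant
# reply) serialized as a single JSONL line, the shape used for OpenAI
# fine-tuning uploads. All message texts here are placeholders.
example = {
    "messages": [
        {"role": "system", "content": "You are my personal medical assistant."},
        {"role": "user", "content": "I sleep poorly and have headaches."},
        {"role": "assistant", "content": "<Jin Yong-style answer goes here>"},
    ]
}

line = json.dumps(example, ensure_ascii=False)
loaded = json.loads(line)
print(loaded["messages"][-1]["role"])
```

A training file is simply one such line per example; consult OpenAI's fine-tuning documentation for the upload and job-creation steps.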
BadreddineHug/1s_librispeech_subset | ---
dataset_info:
features:
- name: file
dtype: 'null'
- name: audio
dtype: 'null'
- name: text
dtype: 'null'
- name: speaker_id
dtype: 'null'
- name: chapter_id
dtype: 'null'
- name: id
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1483
dataset_size: 0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
juancopi81/orca-math-word-problems-10002_20004 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 10124159
num_examples: 10002
download_size: 3449907
dataset_size: 10124159
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/details_automerger__YamShadow-7B | ---
pretty_name: Evaluation run of automerger/YamShadow-7B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [automerger/YamShadow-7B](https://huggingface.co/automerger/YamShadow-7B) on the\
\ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_automerger__YamShadow-7B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-13T05:53:53.764293](https://huggingface.co/datasets/open-llm-leaderboard/details_automerger__YamShadow-7B/blob/main/results_2024-03-13T05-53-53.764293.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6527541198598119,\n\
\ \"acc_stderr\": 0.032147666943338864,\n \"acc_norm\": 0.6518777917474098,\n\
\ \"acc_norm_stderr\": 0.032826107851510807,\n \"mc1\": 0.6266829865361077,\n\
\ \"mc1_stderr\": 0.016932370557570638,\n \"mc2\": 0.7835065690234105,\n\
\ \"mc2_stderr\": 0.013602365203156455\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.7073378839590444,\n \"acc_stderr\": 0.013295916103619422,\n\
\ \"acc_norm\": 0.7252559726962458,\n \"acc_norm_stderr\": 0.013044617212771227\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7120095598486357,\n\
\ \"acc_stderr\": 0.0045190116884171625,\n \"acc_norm\": 0.8889663413662617,\n\
\ \"acc_norm_stderr\": 0.0031353173122281226\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621505,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621505\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6370370370370371,\n\
\ \"acc_stderr\": 0.04153948404742398,\n \"acc_norm\": 0.6370370370370371,\n\
\ \"acc_norm_stderr\": 0.04153948404742398\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7105263157894737,\n \"acc_stderr\": 0.03690677986137283,\n\
\ \"acc_norm\": 0.7105263157894737,\n \"acc_norm_stderr\": 0.03690677986137283\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.64,\n\
\ \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.64,\n \
\ \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6943396226415094,\n \"acc_stderr\": 0.028353298073322663,\n\
\ \"acc_norm\": 0.6943396226415094,\n \"acc_norm_stderr\": 0.028353298073322663\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7847222222222222,\n\
\ \"acc_stderr\": 0.03437079344106136,\n \"acc_norm\": 0.7847222222222222,\n\
\ \"acc_norm_stderr\": 0.03437079344106136\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \
\ \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\"\
: 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6589595375722543,\n\
\ \"acc_stderr\": 0.03614665424180826,\n \"acc_norm\": 0.6589595375722543,\n\
\ \"acc_norm_stderr\": 0.03614665424180826\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.39215686274509803,\n \"acc_stderr\": 0.04858083574266345,\n\
\ \"acc_norm\": 0.39215686274509803,\n \"acc_norm_stderr\": 0.04858083574266345\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.73,\n\
\ \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5617021276595745,\n \"acc_stderr\": 0.03243618636108101,\n\
\ \"acc_norm\": 0.5617021276595745,\n \"acc_norm_stderr\": 0.03243618636108101\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.47368421052631576,\n\
\ \"acc_stderr\": 0.046970851366478626,\n \"acc_norm\": 0.47368421052631576,\n\
\ \"acc_norm_stderr\": 0.046970851366478626\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5586206896551724,\n \"acc_stderr\": 0.04137931034482757,\n\
\ \"acc_norm\": 0.5586206896551724,\n \"acc_norm_stderr\": 0.04137931034482757\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.42063492063492064,\n \"acc_stderr\": 0.025424835086923996,\n \"\
acc_norm\": 0.42063492063492064,\n \"acc_norm_stderr\": 0.025424835086923996\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.46825396825396826,\n\
\ \"acc_stderr\": 0.04463112720677171,\n \"acc_norm\": 0.46825396825396826,\n\
\ \"acc_norm_stderr\": 0.04463112720677171\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.7806451612903226,\n \"acc_stderr\": 0.023540799358723295,\n \"\
acc_norm\": 0.7806451612903226,\n \"acc_norm_stderr\": 0.023540799358723295\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.5221674876847291,\n \"acc_stderr\": 0.03514528562175007,\n \"\
acc_norm\": 0.5221674876847291,\n \"acc_norm_stderr\": 0.03514528562175007\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\"\
: 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7818181818181819,\n \"acc_stderr\": 0.03225078108306289,\n\
\ \"acc_norm\": 0.7818181818181819,\n \"acc_norm_stderr\": 0.03225078108306289\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.797979797979798,\n \"acc_stderr\": 0.02860620428922987,\n \"acc_norm\"\
: 0.797979797979798,\n \"acc_norm_stderr\": 0.02860620428922987\n },\n\
\ \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \
\ \"acc\": 0.8911917098445595,\n \"acc_stderr\": 0.022473253332768763,\n\
\ \"acc_norm\": 0.8911917098445595,\n \"acc_norm_stderr\": 0.022473253332768763\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6641025641025641,\n \"acc_stderr\": 0.023946724741563976,\n\
\ \"acc_norm\": 0.6641025641025641,\n \"acc_norm_stderr\": 0.023946724741563976\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.31851851851851853,\n \"acc_stderr\": 0.02840653309060846,\n \
\ \"acc_norm\": 0.31851851851851853,\n \"acc_norm_stderr\": 0.02840653309060846\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6638655462184874,\n \"acc_stderr\": 0.03068473711513537,\n \
\ \"acc_norm\": 0.6638655462184874,\n \"acc_norm_stderr\": 0.03068473711513537\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3973509933774834,\n \"acc_stderr\": 0.039955240076816806,\n \"\
acc_norm\": 0.3973509933774834,\n \"acc_norm_stderr\": 0.039955240076816806\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8403669724770643,\n \"acc_stderr\": 0.01570349834846177,\n \"\
acc_norm\": 0.8403669724770643,\n \"acc_norm_stderr\": 0.01570349834846177\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5185185185185185,\n \"acc_stderr\": 0.03407632093854051,\n \"\
acc_norm\": 0.5185185185185185,\n \"acc_norm_stderr\": 0.03407632093854051\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8382352941176471,\n \"acc_stderr\": 0.02584501798692692,\n \"\
acc_norm\": 0.8382352941176471,\n \"acc_norm_stderr\": 0.02584501798692692\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.810126582278481,\n \"acc_stderr\": 0.02553010046023349,\n \
\ \"acc_norm\": 0.810126582278481,\n \"acc_norm_stderr\": 0.02553010046023349\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.695067264573991,\n\
\ \"acc_stderr\": 0.030898610882477515,\n \"acc_norm\": 0.695067264573991,\n\
\ \"acc_norm_stderr\": 0.030898610882477515\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7938931297709924,\n \"acc_stderr\": 0.03547771004159463,\n\
\ \"acc_norm\": 0.7938931297709924,\n \"acc_norm_stderr\": 0.03547771004159463\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7603305785123967,\n \"acc_stderr\": 0.03896878985070416,\n \"\
acc_norm\": 0.7603305785123967,\n \"acc_norm_stderr\": 0.03896878985070416\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7870370370370371,\n\
\ \"acc_stderr\": 0.0395783547198098,\n \"acc_norm\": 0.7870370370370371,\n\
\ \"acc_norm_stderr\": 0.0395783547198098\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7791411042944786,\n \"acc_stderr\": 0.03259177392742178,\n\
\ \"acc_norm\": 0.7791411042944786,\n \"acc_norm_stderr\": 0.03259177392742178\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.41964285714285715,\n\
\ \"acc_stderr\": 0.04684099321077106,\n \"acc_norm\": 0.41964285714285715,\n\
\ \"acc_norm_stderr\": 0.04684099321077106\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n\
\ \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8717948717948718,\n\
\ \"acc_stderr\": 0.021901905115073325,\n \"acc_norm\": 0.8717948717948718,\n\
\ \"acc_norm_stderr\": 0.021901905115073325\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8263090676883781,\n\
\ \"acc_stderr\": 0.01354741565866226,\n \"acc_norm\": 0.8263090676883781,\n\
\ \"acc_norm_stderr\": 0.01354741565866226\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7167630057803468,\n \"acc_stderr\": 0.024257901705323385,\n\
\ \"acc_norm\": 0.7167630057803468,\n \"acc_norm_stderr\": 0.024257901705323385\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.441340782122905,\n\
\ \"acc_stderr\": 0.016607021781050873,\n \"acc_norm\": 0.441340782122905,\n\
\ \"acc_norm_stderr\": 0.016607021781050873\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7189542483660131,\n \"acc_stderr\": 0.025738854797818737,\n\
\ \"acc_norm\": 0.7189542483660131,\n \"acc_norm_stderr\": 0.025738854797818737\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6913183279742765,\n\
\ \"acc_stderr\": 0.026236965881153266,\n \"acc_norm\": 0.6913183279742765,\n\
\ \"acc_norm_stderr\": 0.026236965881153266\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7222222222222222,\n \"acc_stderr\": 0.024922001168886335,\n\
\ \"acc_norm\": 0.7222222222222222,\n \"acc_norm_stderr\": 0.024922001168886335\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.5070921985815603,\n \"acc_stderr\": 0.02982449855912901,\n \
\ \"acc_norm\": 0.5070921985815603,\n \"acc_norm_stderr\": 0.02982449855912901\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4745762711864407,\n\
\ \"acc_stderr\": 0.012753716929101004,\n \"acc_norm\": 0.4745762711864407,\n\
\ \"acc_norm_stderr\": 0.012753716929101004\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6875,\n \"acc_stderr\": 0.02815637344037142,\n \
\ \"acc_norm\": 0.6875,\n \"acc_norm_stderr\": 0.02815637344037142\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6781045751633987,\n \"acc_stderr\": 0.018901015322093092,\n \
\ \"acc_norm\": 0.6781045751633987,\n \"acc_norm_stderr\": 0.018901015322093092\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n\
\ \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n\
\ \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7428571428571429,\n \"acc_stderr\": 0.02797982353874455,\n\
\ \"acc_norm\": 0.7428571428571429,\n \"acc_norm_stderr\": 0.02797982353874455\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.835820895522388,\n\
\ \"acc_stderr\": 0.026193923544454125,\n \"acc_norm\": 0.835820895522388,\n\
\ \"acc_norm_stderr\": 0.026193923544454125\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.86,\n \"acc_stderr\": 0.0348735088019777,\n \
\ \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.0348735088019777\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5602409638554217,\n\
\ \"acc_stderr\": 0.03864139923699121,\n \"acc_norm\": 0.5602409638554217,\n\
\ \"acc_norm_stderr\": 0.03864139923699121\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.02878210810540171,\n\
\ \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.02878210810540171\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.6266829865361077,\n\
\ \"mc1_stderr\": 0.016932370557570638,\n \"mc2\": 0.7835065690234105,\n\
\ \"mc2_stderr\": 0.013602365203156455\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8500394632991318,\n \"acc_stderr\": 0.010034394804580809\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7020470053070508,\n \
\ \"acc_stderr\": 0.012597932232914517\n }\n}\n```"
repo_url: https://huggingface.co/automerger/YamShadow-7B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|arc:challenge|25_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|gsm8k|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hellaswag|10_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-13T05-53-53.764293.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-13T05-53-53.764293.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- '**/details_harness|winogrande|5_2024-03-13T05-53-53.764293.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-13T05-53-53.764293.parquet'
- config_name: results
data_files:
- split: 2024_03_13T05_53_53.764293
path:
- results_2024-03-13T05-53-53.764293.parquet
- split: latest
path:
- results_2024-03-13T05-53-53.764293.parquet
---
# Dataset Card for Evaluation run of automerger/YamShadow-7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [automerger/YamShadow-7B](https://huggingface.co/automerger/YamShadow-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_automerger__YamShadow-7B",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-03-13T05:53:53.764293](https://huggingface.co/datasets/open-llm-leaderboard/details_automerger__YamShadow-7B/blob/main/results_2024-03-13T05-53-53.764293.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each task's results in its timestamped split and in the "latest" split of each configuration):
```python
{
"all": {
"acc": 0.6527541198598119,
"acc_stderr": 0.032147666943338864,
"acc_norm": 0.6518777917474098,
"acc_norm_stderr": 0.032826107851510807,
"mc1": 0.6266829865361077,
"mc1_stderr": 0.016932370557570638,
"mc2": 0.7835065690234105,
"mc2_stderr": 0.013602365203156455
},
"harness|arc:challenge|25": {
"acc": 0.7073378839590444,
"acc_stderr": 0.013295916103619422,
"acc_norm": 0.7252559726962458,
"acc_norm_stderr": 0.013044617212771227
},
"harness|hellaswag|10": {
"acc": 0.7120095598486357,
"acc_stderr": 0.0045190116884171625,
"acc_norm": 0.8889663413662617,
"acc_norm_stderr": 0.0031353173122281226
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621505,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621505
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6370370370370371,
"acc_stderr": 0.04153948404742398,
"acc_norm": 0.6370370370370371,
"acc_norm_stderr": 0.04153948404742398
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7105263157894737,
"acc_stderr": 0.03690677986137283,
"acc_norm": 0.7105263157894737,
"acc_norm_stderr": 0.03690677986137283
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.64,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.64,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6943396226415094,
"acc_stderr": 0.028353298073322663,
"acc_norm": 0.6943396226415094,
"acc_norm_stderr": 0.028353298073322663
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7847222222222222,
"acc_stderr": 0.03437079344106136,
"acc_norm": 0.7847222222222222,
"acc_norm_stderr": 0.03437079344106136
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252604,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252604
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6589595375722543,
"acc_stderr": 0.03614665424180826,
"acc_norm": 0.6589595375722543,
"acc_norm_stderr": 0.03614665424180826
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.39215686274509803,
"acc_stderr": 0.04858083574266345,
"acc_norm": 0.39215686274509803,
"acc_norm_stderr": 0.04858083574266345
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.73,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.73,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5617021276595745,
"acc_stderr": 0.03243618636108101,
"acc_norm": 0.5617021276595745,
"acc_norm_stderr": 0.03243618636108101
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.47368421052631576,
"acc_stderr": 0.046970851366478626,
"acc_norm": 0.47368421052631576,
"acc_norm_stderr": 0.046970851366478626
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5586206896551724,
"acc_stderr": 0.04137931034482757,
"acc_norm": 0.5586206896551724,
"acc_norm_stderr": 0.04137931034482757
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.42063492063492064,
"acc_stderr": 0.025424835086923996,
"acc_norm": 0.42063492063492064,
"acc_norm_stderr": 0.025424835086923996
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.46825396825396826,
"acc_stderr": 0.04463112720677171,
"acc_norm": 0.46825396825396826,
"acc_norm_stderr": 0.04463112720677171
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7806451612903226,
"acc_stderr": 0.023540799358723295,
"acc_norm": 0.7806451612903226,
"acc_norm_stderr": 0.023540799358723295
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5221674876847291,
"acc_stderr": 0.03514528562175007,
"acc_norm": 0.5221674876847291,
"acc_norm_stderr": 0.03514528562175007
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7818181818181819,
"acc_stderr": 0.03225078108306289,
"acc_norm": 0.7818181818181819,
"acc_norm_stderr": 0.03225078108306289
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.797979797979798,
"acc_stderr": 0.02860620428922987,
"acc_norm": 0.797979797979798,
"acc_norm_stderr": 0.02860620428922987
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8911917098445595,
"acc_stderr": 0.022473253332768763,
"acc_norm": 0.8911917098445595,
"acc_norm_stderr": 0.022473253332768763
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6641025641025641,
"acc_stderr": 0.023946724741563976,
"acc_norm": 0.6641025641025641,
"acc_norm_stderr": 0.023946724741563976
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.31851851851851853,
"acc_stderr": 0.02840653309060846,
"acc_norm": 0.31851851851851853,
"acc_norm_stderr": 0.02840653309060846
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6638655462184874,
"acc_stderr": 0.03068473711513537,
"acc_norm": 0.6638655462184874,
"acc_norm_stderr": 0.03068473711513537
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3973509933774834,
"acc_stderr": 0.039955240076816806,
"acc_norm": 0.3973509933774834,
"acc_norm_stderr": 0.039955240076816806
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8403669724770643,
"acc_stderr": 0.01570349834846177,
"acc_norm": 0.8403669724770643,
"acc_norm_stderr": 0.01570349834846177
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5185185185185185,
"acc_stderr": 0.03407632093854051,
"acc_norm": 0.5185185185185185,
"acc_norm_stderr": 0.03407632093854051
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8382352941176471,
"acc_stderr": 0.02584501798692692,
"acc_norm": 0.8382352941176471,
"acc_norm_stderr": 0.02584501798692692
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.810126582278481,
"acc_stderr": 0.02553010046023349,
"acc_norm": 0.810126582278481,
"acc_norm_stderr": 0.02553010046023349
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.695067264573991,
"acc_stderr": 0.030898610882477515,
"acc_norm": 0.695067264573991,
"acc_norm_stderr": 0.030898610882477515
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7938931297709924,
"acc_stderr": 0.03547771004159463,
"acc_norm": 0.7938931297709924,
"acc_norm_stderr": 0.03547771004159463
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7603305785123967,
"acc_stderr": 0.03896878985070416,
"acc_norm": 0.7603305785123967,
"acc_norm_stderr": 0.03896878985070416
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7870370370370371,
"acc_stderr": 0.0395783547198098,
"acc_norm": 0.7870370370370371,
"acc_norm_stderr": 0.0395783547198098
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7791411042944786,
"acc_stderr": 0.03259177392742178,
"acc_norm": 0.7791411042944786,
"acc_norm_stderr": 0.03259177392742178
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.41964285714285715,
"acc_stderr": 0.04684099321077106,
"acc_norm": 0.41964285714285715,
"acc_norm_stderr": 0.04684099321077106
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8717948717948718,
"acc_stderr": 0.021901905115073325,
"acc_norm": 0.8717948717948718,
"acc_norm_stderr": 0.021901905115073325
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8263090676883781,
"acc_stderr": 0.01354741565866226,
"acc_norm": 0.8263090676883781,
"acc_norm_stderr": 0.01354741565866226
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7167630057803468,
"acc_stderr": 0.024257901705323385,
"acc_norm": 0.7167630057803468,
"acc_norm_stderr": 0.024257901705323385
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.441340782122905,
"acc_stderr": 0.016607021781050873,
"acc_norm": 0.441340782122905,
"acc_norm_stderr": 0.016607021781050873
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7189542483660131,
"acc_stderr": 0.025738854797818737,
"acc_norm": 0.7189542483660131,
"acc_norm_stderr": 0.025738854797818737
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6913183279742765,
"acc_stderr": 0.026236965881153266,
"acc_norm": 0.6913183279742765,
"acc_norm_stderr": 0.026236965881153266
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.024922001168886335,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.024922001168886335
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5070921985815603,
"acc_stderr": 0.02982449855912901,
"acc_norm": 0.5070921985815603,
"acc_norm_stderr": 0.02982449855912901
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4745762711864407,
"acc_stderr": 0.012753716929101004,
"acc_norm": 0.4745762711864407,
"acc_norm_stderr": 0.012753716929101004
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6875,
"acc_stderr": 0.02815637344037142,
"acc_norm": 0.6875,
"acc_norm_stderr": 0.02815637344037142
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6781045751633987,
"acc_stderr": 0.018901015322093092,
"acc_norm": 0.6781045751633987,
"acc_norm_stderr": 0.018901015322093092
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302506,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302506
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7428571428571429,
"acc_stderr": 0.02797982353874455,
"acc_norm": 0.7428571428571429,
"acc_norm_stderr": 0.02797982353874455
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.835820895522388,
"acc_stderr": 0.026193923544454125,
"acc_norm": 0.835820895522388,
"acc_norm_stderr": 0.026193923544454125
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.86,
"acc_stderr": 0.0348735088019777,
"acc_norm": 0.86,
"acc_norm_stderr": 0.0348735088019777
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5602409638554217,
"acc_stderr": 0.03864139923699121,
"acc_norm": 0.5602409638554217,
"acc_norm_stderr": 0.03864139923699121
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8304093567251462,
"acc_stderr": 0.02878210810540171,
"acc_norm": 0.8304093567251462,
"acc_norm_stderr": 0.02878210810540171
},
"harness|truthfulqa:mc|0": {
"mc1": 0.6266829865361077,
"mc1_stderr": 0.016932370557570638,
"mc2": 0.7835065690234105,
"mc2_stderr": 0.013602365203156455
},
"harness|winogrande|5": {
"acc": 0.8500394632991318,
"acc_stderr": 0.010034394804580809
},
"harness|gsm8k|5": {
"acc": 0.7020470053070508,
"acc_stderr": 0.012597932232914517
}
}
```
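The figures in the top-level `all` block summarize the per-task scores (for `acc`, roughly an unweighted mean across the harness tasks). A minimal sketch of that kind of aggregation, using a few hypothetical per-task values rather than the full results above:

```python
from statistics import mean

# Hypothetical per-task accuracies (illustrative, not the actual values above).
task_acc = {
    "harness|arc:challenge|25": 0.70,
    "harness|hellaswag|10": 0.71,
    "harness|hendrycksTest-anatomy|5": 0.64,
    "harness|winogrande|5": 0.85,
}

# Unweighted macro-average over tasks, as in the "all" summary block.
overall_acc = mean(task_acc.values())
print(round(overall_acc, 3))  # 0.725
```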
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
CyberHarem/iori_minase_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of iori_minase/水瀬伊織/水濑伊织 (Azur Lane)
This is the dataset of iori_minase/水瀬伊織/水濑伊织 (Azur Lane), containing 500 images and their tags.
The core tags of this character are `long_hair, brown_hair, hairband, brown_eyes, red_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, etc.); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 512.08 MiB | [Download](https://huggingface.co/datasets/CyberHarem/iori_minase_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 339.78 MiB | [Download](https://huggingface.co/datasets/CyberHarem/iori_minase_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1141 | 676.23 MiB | [Download](https://huggingface.co/datasets/CyberHarem/iori_minase_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 467.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/iori_minase_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1141 | 885.61 MiB | [Download](https://huggingface.co/datasets/CyberHarem/iori_minase_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/iori_minase_azurlane',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 9 |  |  |  |  |  | 1girl, dress, solo, stuffed_animal, stuffed_bunny, rabbit, blush, bow |
| 1 | 7 |  |  |  |  |  | 1girl, solo, stuffed_animal, stuffed_bunny, dress, smile, one_eye_closed |
| 2 | 8 |  |  |  |  |  | 1girl, dress, solo, black_thighhighs, blush, zettai_ryouiki, bow, smile |
| 3 | 8 |  |  |  |  |  | 1girl, bracelet, solo, dress, blush, looking_at_viewer, smile, bare_shoulders, open_mouth, simple_background, white_background |
| 4 | 15 |  |  |  |  |  | 1girl, necklace, solo, beret, dress, thighhighs, belt, smile, earrings, wrist_cuffs, bare_shoulders, one_eye_closed, open_mouth |
| 5 | 6 |  |  |  |  |  | 1girl, solo, looking_at_viewer, sailor_bikini, white_bikini, blush, navel, sitting, bow, breasts, open_mouth, simple_background, smile, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | dress | solo | stuffed_animal | stuffed_bunny | rabbit | blush | bow | smile | one_eye_closed | black_thighhighs | zettai_ryouiki | bracelet | looking_at_viewer | bare_shoulders | open_mouth | simple_background | white_background | necklace | beret | thighhighs | belt | earrings | wrist_cuffs | sailor_bikini | white_bikini | navel | sitting | breasts |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------|:-----------------|:----------------|:---------|:--------|:------|:--------|:-----------------|:-------------------|:-----------------|:-----------|:--------------------|:-----------------|:-------------|:--------------------|:-------------------|:-----------|:--------|:-------------|:-------|:-----------|:--------------|:----------------|:---------------|:--------|:----------|:----------|
| 0 | 9 |  |  |  |  |  | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | X | X | X | X | X | | | | X | X | | | | | | | | | | | | | | | | | | | |
| 2 | 8 |  |  |  |  |  | X | X | X | | | | X | X | X | | X | X | | | | | | | | | | | | | | | | | |
| 3 | 8 |  |  |  |  |  | X | X | X | | | | X | | X | | | | X | X | X | X | X | X | | | | | | | | | | | |
| 4 | 15 |  |  |  |  |  | X | X | X | | | | | | X | X | | | | | X | X | | | X | X | X | X | X | X | | | | | |
| 5 | 6 |  |  |  |  |  | X | | X | | | | X | X | X | | | | | X | | X | X | X | | | | | | | X | X | X | X | X |
|
CyberHarem/sakura_akino_areyoutheonlyonewholovesme | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Sakura Akino/秋野桜 (Are you the only one who loves me?)
This is the dataset of Sakura Akino/秋野桜 (Are you the only one who loves me?), containing 328 images and their tags.
The core tags of this character are `long_hair, hair_ornament, hair_flower, purple_hair, hairband, purple_eyes, black_hairband, pink_hair, bow, mole_under_eye, mole`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 328 | 215.59 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakura_akino_areyoutheonlyonewholovesme/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 328 | 215.45 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakura_akino_areyoutheonlyonewholovesme/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 614 | 370.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakura_akino_areyoutheonlyonewholovesme/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/sakura_akino_areyoutheonlyonewholovesme',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, closed_mouth, indoors, serafuku, solo, upper_body, white_shirt, collarbone, pink_flower, sidelocks, blue_bow, blue_sailor_collar |
| 1 | 5 |  |  |  |  |  | 1girl, indoors, sailor_collar, serafuku, sidelocks, solo, sweatdrop, white_shirt, blush, closed_mouth, pink_flower, portrait, collarbone, pink_eyes |
| 2 | 7 |  |  |  |  |  | 1girl, flower, serafuku, solo, upper_body, indoors, looking_at_viewer, smile, bookshelf, blush, holding_book, pink_eyes, sailor_collar, short_sleeves |
| 3 | 5 |  |  |  |  |  | 1girl, closed_mouth, portrait, solo, looking_at_viewer, pink_flower, smile, blush |
| 4 | 9 |  |  |  |  |  | 1girl, pleated_skirt, serafuku, white_shirt, blue_skirt, holding_book, indoors, solo, standing, closed_mouth, long_sleeves, window, black_skirt, black_thighhighs, very_long_hair, blue_sailor_collar, pink_flower, zettai_ryouiki, from_side, profile, smile |
| 5 | 7 |  |  |  |  |  | 1girl, blue_sailor_collar, blue_skirt, indoors, long_sleeves, on_chair, pleated_skirt, serafuku, sitting, solo, white_shirt, blue_bow, bowtie, collarbone, desk, pink_flower, black_thighhighs, closed_mouth, feet_out_of_frame, holding, open_mouth, sidelocks, smartphone, tablet_pc, v-shaped_eyebrows |
| 6 | 6 |  |  |  |  |  | 1girl, flower, holding_box, indoors, long_sleeves, pleated_skirt, sailor_collar, serafuku, solo, v-shaped_eyebrows, carrying, open_mouth, white_shirt, black_skirt, arm_up, blue_skirt, sidelocks |
| 7 | 8 |  |  |  |  |  | 1girl, black_thighhighs, pleated_skirt, flower, serafuku, blue_skirt, solo, cherry_blossoms, open_mouth, looking_at_viewer, pink_eyes, zettai_ryouiki, long_sleeves, sailor_collar, shirt, smile |
| 8 | 8 |  |  |  |  |  | 2girls, flower, serafuku, short_sleeves, pleated_skirt, solo_focus, blue_skirt, brown_hair, day, holding_book, outdoors, tree, bookshelf, indoors, open_book, reading, sailor_collar |
| 9 | 9 |  |  |  |  |  | 1girl, closed_mouth, flower, looking_at_viewer, smile, solo, sidelocks, see-through_sleeves, short_sleeves, white_dress, collarbone, floral_print, upper_body, white_background |
| 10 | 7 |  |  |  |  |  | blue_sailor_collar, blue_skirt, index_finger_raised, long_sleeves, pleated_skirt, school_bag, serafuku, solo_focus, white_background, 1girl, bowtie, simple_background, aqua_ribbon, black_thighhighs, hair_intakes, hand_up, sidelocks, zettai_ryouiki, black_footwear, from_side, miniskirt, profile, white_shirt, flower, full_body, loafers |
| 11 | 11 |  |  |  |  |  | cleavage, 1girl, medium_breasts, ponytail, flower, purple_bikini, solo, sidelocks, collarbone, frilled_bikini, open_mouth, upper_body, indoors, navel, pink_bikini, smile, armpits, bare_shoulders, front-tie_top, looking_at_viewer |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | closed_mouth | indoors | serafuku | solo | upper_body | white_shirt | collarbone | pink_flower | sidelocks | blue_bow | blue_sailor_collar | sailor_collar | sweatdrop | blush | portrait | pink_eyes | flower | looking_at_viewer | smile | bookshelf | holding_book | short_sleeves | pleated_skirt | blue_skirt | standing | long_sleeves | window | black_skirt | black_thighhighs | very_long_hair | zettai_ryouiki | from_side | profile | on_chair | sitting | bowtie | desk | feet_out_of_frame | holding | open_mouth | smartphone | tablet_pc | v-shaped_eyebrows | holding_box | carrying | arm_up | cherry_blossoms | shirt | 2girls | solo_focus | brown_hair | day | outdoors | tree | open_book | reading | see-through_sleeves | white_dress | floral_print | white_background | index_finger_raised | school_bag | simple_background | aqua_ribbon | hair_intakes | hand_up | black_footwear | miniskirt | full_body | loafers | cleavage | medium_breasts | ponytail | purple_bikini | frilled_bikini | navel | pink_bikini | armpits | bare_shoulders | front-tie_top |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:---------------|:----------|:-----------|:-------|:-------------|:--------------|:-------------|:--------------|:------------|:-----------|:---------------------|:----------------|:------------|:--------|:-----------|:------------|:---------|:--------------------|:--------|:------------|:---------------|:----------------|:----------------|:-------------|:-----------|:---------------|:---------|:--------------|:-------------------|:-----------------|:-----------------|:------------|:----------|:-----------|:----------|:---------|:-------|:--------------------|:----------|:-------------|:-------------|:------------|:--------------------|:--------------|:-----------|:---------|:------------------|:--------|:---------|:-------------|:-------------|:------|:-----------|:-------|:------------|:----------|:----------------------|:--------------|:---------------|:-------------------|:----------------------|:-------------|:--------------------|:--------------|:---------------|:----------|:-----------------|:------------|:------------|:----------|:-----------|:-----------------|:-----------|:----------------|:-----------------|:--------|:--------------|:----------|:-----------------|:----------------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | X | | X | X | X | X | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | | X | X | X | X | | | | | | | X | | X | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | X | | | X | | | | X | | | | | | X | X | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 9 |  |  |  |  |  | X | X | X | X | X | | X | | X | | | X | | | | | | | | X | | X | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 7 |  |  |  |  |  | X | X | X | X | X | | X | X | X | X | X | X | | | | | | | | | | | | X | X | | X | | | X | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 6 |  |  |  |  |  | X | | X | X | X | | X | | | X | | | X | | | | | X | | | | | | X | X | | X | | X | | | | | | | | | | | | X | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 8 |  |  |  |  |  | X | | | X | X | | | | | | | | X | | | | X | X | X | X | | | | X | X | | X | | | X | | X | | | | | | | | | X | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 8 |  |  |  |  |  | | | X | X | | | | | | | | | X | | | | | X | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | |
| 9 | 9 |  |  |  |  |  | X | X | | | X | X | | X | | X | | | | | | | | X | X | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 10 | 7 |  |  |  |  |  | X | | | X | | | X | | | X | | X | | | | | | X | | | | | | X | X | | X | | | X | | X | X | X | | | X | | | | | | | | | | | | | | X | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | |
| 11 | 11 |  |  |  |  |  | X | | X | | X | X | | X | | X | | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X |
|
jmi6/hldatasetv2 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2206466.0
num_examples: 56
download_size: 0
dataset_size: 2206466.0
---
# Dataset Card for "hldatasetv2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-eval-samsum-samsum-6999f5-3301091732 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: knkarthick/MEETING_SUMMARY
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: knkarthick/MEETING_SUMMARY
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@TheAlphaQ](https://huggingface.co/TheAlphaQ) for evaluating this model. |
tyzhu/find_first_sent_train_400_eval_40_baseline | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: title
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 680916
num_examples: 400
- name: validation
num_bytes: 70650
num_examples: 40
download_size: 498501
dataset_size: 751566
---
# Dataset Card for "find_first_sent_train_400_eval_40_baseline"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
PocketDoc/RUCAIBox-Story-Generation-Alpaca | ---
task_categories:
- text-generation
language:
- en
---
https://huggingface.co/datasets/RUCAIBox/Story-Generation
The RUC AI Box HC Story Generation dataset, augmented and converted to the Alpaca format.
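For reference, a sketch of what an Alpaca-format record and its rendered training prompt typically look like. The field names (`instruction`, `input`, `output`) and the prompt template are the common Alpaca convention, not something stated in this card; inspect a sample record from this dump to confirm its actual schema.

```python
# Minimal sketch of the standard Alpaca record layout and prompt template.
# Field names and template wording follow the common Alpaca convention and
# may differ in this particular dump -- check a real record to confirm.
record = {
    "instruction": "Continue the story.",
    "input": "Once upon a time, a fox found a locked chest in the woods.",
    "output": "She nudged it downhill until the lid sprang open...",
}

def to_alpaca_prompt(rec: dict) -> str:
    """Render a record into the usual Alpaca training prompt."""
    if rec.get("input"):
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{rec['instruction']}\n\n"
            f"### Input:\n{rec['input']}\n\n"
            f"### Response:\n{rec['output']}"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{rec['instruction']}\n\n"
        f"### Response:\n{rec['output']}"
    )

print(to_alpaca_prompt(record))
```

Records without an `input` field conventionally use the shorter template without the `### Input:` section, as the branch above shows.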
No filtering has been done. |
HuggingFaceM4/MM_VET_modif | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: question
dtype: string
- name: answer
dtype: string
- name: question_id
dtype: string
splits:
- name: test
num_bytes: 266162086.0
num_examples: 218
download_size: 232971573
dataset_size: 266162086.0
---
# Dataset Card for "MM_VET_modif"
[MM-VET Benchmark](https://arxiv.org/pdf/2308.02490.pdf) |
TrainingDataPro/spine-segmentation-dataset | ---
license: cc-by-nc-nd-4.0
task_categories:
- image-classification
- object-detection
- image-segmentation
- image-to-image
language:
- en
tags:
- medical
- biology
- code
---
# Spine MRI Dataset, Anomaly Detection & Segmentation
The dataset consists of .dcm files containing **MRI scans of the spine** of a person with several dystrophic changes, such as osteophytes, dorsal disc extrusions, dorsal disc protrusions and spondyloarthrosis. The images are **labeled** by doctors and accompanied by a **report** in PDF format.
The dataset includes 5 studies made from different angles, which provide a comprehensive understanding of several dystrophic changes and are useful for training spine anomaly classification algorithms. Each study includes detailed imaging of the spine, covering the *vertebrae, discs, nerves, and surrounding tissues*.
### MRI study angles in the dataset

# 💴 For Commercial Usage: Full version of the dataset includes 20,000 spine studies of people with different conditions, leave a request on **[TrainingData](https://trainingdata.pro/data-market/spine-mri?utm_source=huggingface&utm_medium=cpc&utm_campaign=spine-segmentation-dataset)** to buy the dataset
### Types of diseases and conditions in the full dataset:
- Degeneration of discs
- Osteophytes
- Osteochondrosis
- Hemangioma
- Disk extrusion
- Spondylitis
- **AND MANY OTHER CONDITIONS**

Researchers and healthcare professionals can use this dataset to study spinal conditions and disorders, such as herniated discs, spinal stenosis, scoliosis, and fractures. The dataset can also be used to develop and evaluate new imaging techniques, computer algorithms for image analysis, and artificial intelligence models for automated diagnosis.
# 💴 Buy the Dataset: This is just an example of the data. Leave a request on [https://trainingdata.pro/data-market](https://trainingdata.pro/data-market/spine-mri?utm_source=huggingface&utm_medium=cpc&utm_campaign=spine-segmentation-dataset) to discuss your requirements, learn about the price and buy the dataset
# Content
### The dataset includes:
- **ST000001**: includes subfolders with 5 studies; each study includes MRI scans in **.dcm and .jpg formats**,
- **DICOMDIR**: includes information about the patient's condition and links to access the files,
- **Spine_MRI_3.pdf**: includes the medical report provided by the radiologist,
- **.csv file**: includes the IDs of the studies and the number of files
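The bundled .csv manifest can be inspected with the standard library before working with the DICOM files themselves. The column names below (`study_id`, `num_files`) and the sample rows are hypothetical; open the actual .csv shipped with the dataset to confirm its header.

```python
import csv
import io

# Hypothetical manifest contents -- the real .csv ships with the dataset and
# its header may differ; this only illustrates the parsing step.
manifest = io.StringIO(
    "study_id,num_files\n"
    "ST000001-01,24\n"
    "ST000001-02,31\n"
)

# Each row maps column names to values; sum the per-study file counts.
studies = list(csv.DictReader(manifest))
total_files = sum(int(row["num_files"]) for row in studies)
print(f"{len(studies)} studies, {total_files} files")
```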
### Medical reports include the following data:
- Patient's **demographic information**,
- **Description** of the case,
- Preliminary **diagnosis**,
- **Recommendations** on further actions
*All patients consented to the publication of data*
# Medical data can be collected in accordance with your requirements.
## [TrainingData](https://trainingdata.pro/data-market/spine-mri?utm_source=huggingface&utm_medium=cpc&utm_campaign=spine-segmentation-dataset) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **<https://www.kaggle.com/trainingdatapro/datasets>**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets**
*keywords: mri spine scans, spinal imaging, radiology dataset, neuroimaging, medical imaging data, image segmentation, lumbar spine mri, thoracic spine mri, cervical spine mri, spine anatomy, spinal cord mri, orthopedic imaging, radiologist dataset, mri scan analysis, spine mri dataset, machine learning medical imaging, spinal abnormalities, image classification, neural network spine scans, mri data analysis, deep learning medical imaging, mri image processing, spine tumor detection, spine injury diagnosis, mri image segmentation, spine mri classification, artificial intelligence in radiology, spine abnormalities detection, spine pathology analysis, mri feature extraction.* |
rishabh99/statewise-data | ---
license: apache-2.0
---
|
adalib/starcoder-apis-0 | ---
dataset_info:
features:
- name: code
dtype: string
- name: apis
sequence: string
- name: extract_api
dtype: string
splits:
- name: train
num_bytes: 12965986266
num_examples: 1589922
download_size: 4517542022
dataset_size: 12965986266
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
facet/generic_humans_synthetic | ---
dataset_info:
features:
- name: file_path
dtype: string
- name: text
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 2581524871.0
num_examples: 23243
download_size: 2576957819
dataset_size: 2581524871.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Thiago32/1drauziovirella | ---
license: openrail
---
|
hmao/cvecpe_multiapi_v0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: fncall
sequence: string
- name: generated_question
dtype: string
splits:
- name: train
num_bytes: 10817
num_examples: 25
download_size: 8627
dataset_size: 10817
---
# Dataset Card for "cvecpe_multiapi_v0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/MULTI_VALUE_rte_linking_relcl | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: test
num_bytes: 78520
num_examples: 159
- name: train
num_bytes: 88580
num_examples: 177
download_size: 118127
dataset_size: 167100
---
# Dataset Card for "MULTI_VALUE_rte_linking_relcl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ttaront/cc_wet_v2 | ---
configs:
- config_name: cc-main-2021-04
data_files: "CC-MAIN-2021-04/*.parquet"
pretty_name: cc_wet_v2
language:
- ja
--- |
Siqueira12/fvvdv | ---
license: openrail
---
|
ChunB1/OpenOrca_cot_only | ---
dataset_info:
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 241672336.03052416
num_examples: 141695
download_size: 65830797
dataset_size: 241672336.03052416
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tyzhu/squad_wrong_rare_v4_train_30_eval_10 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 546548
num_examples: 368
- name: validation
num_bytes: 50213
num_examples: 50
download_size: 105441
dataset_size: 596761
---
# Dataset Card for "squad_wrong_rare_v4_train_30_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
eyupbarlas/intelli-maintain-dummy-tickets | ---
license: mit
task_categories:
- text-generation
size_categories:
- n<1K
---
# Dataset Card for IntelliMaintain Dummy Tickets |
stjiris/IRIS_sts | ---
pretty_name: IRIS Legal Dataset
annotations_creators:
- automated
language_creators:
- found
language:
- pt
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K>n
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- text-scoring
- semantic-similarity-scoring
---


Work developed as part of [Project IRIS](https://www.inesc-id.pt/projects/PR07005/).
Thesis: [A Semantic Search System for Supremo Tribunal de Justiça](https://rufimelo99.github.io/SemanticSearchSystemForSTJ/)
# Portuguese Legal Sentences
Collection of Legal Sentence pairs from the Portuguese Supreme Court of Justice.
This dataset was built to be used for Semantic Textual Similarity:
- Values from 0-1: random sentence pairs across documents
- Values from 2-4: sentence pairs from the same summary (implying some level of entailment)
- Values from 4-5: sentence pairs generated through OpenAI's text-davinci-003 (prompt: `"Escreve por outras palavras:\nEntrada:\n" + originalQuery + "\nSaída:\n"`)
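The score bands above double as coarse provenance labels for each pair. A minimal sketch of that bucketing follows; the column names (`sentence1`, `sentence2`, `score`) and the exact band edges are assumptions inferred from this card, so verify them against the dataset's actual schema.

```python
# Sketch: bucket STS scores into the three provenance bands described above.
# Band edges are assumptions (the card's 2-4 and 4-5 ranges overlap at 4;
# here a score of exactly 4 is treated as a generated paraphrase).
def provenance(score: float) -> str:
    if score <= 1:
        return "random-pair"    # random sentences across documents
    if score < 4:
        return "same-summary"   # some level of entailment
    return "paraphrase"         # generated via text-davinci-003

# Hypothetical rows with the assumed column names.
rows = [
    {"sentence1": "A", "sentence2": "B", "score": 0.5},
    {"sentence1": "C", "sentence2": "D", "score": 3.0},
    {"sentence1": "E", "sentence2": "F", "score": 4.8},
]
labels = [provenance(r["score"]) for r in rows]
```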
### Contributions
[@rufimelo99](https://github.com/rufimelo99)
If you use this work, please cite:
```bibtex
@inproceedings{MeloSemantic,
author = {Melo, Rui and Santos, Professor Pedro Alexandre and Dias, Professor Jo{\~a}o},
title = {A {Semantic} {Search} {System} for {Supremo} {Tribunal} de {Justi}{\c c}a},
}
``` |
one-sec-cv12/chunk_137 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 21919018032.875
num_examples: 228209
download_size: 19725752844
dataset_size: 21919018032.875
---
# Dataset Card for "chunk_137"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
freddyaboulton/gradio-theme-subdomains | ---
license: mit
---
|
titanite09/SkinDiseaseDataset | ---
license: apache-2.0
task_categories:
- image-classification
language:
- en
tags:
- medical
size_categories:
- 10K<n<100K
--- |
kristinashemet/Answer_from_text_Part1_23.03 | ---
dataset_info:
features:
- name: Text
dtype: string
- name: Question
dtype: string
- name: Answer
dtype: string
splits:
- name: train
num_bytes: 646028
num_examples: 372
download_size: 88209
dataset_size: 646028
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rntc/blurb_bc2gm_a | ---
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: type
dtype: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B
'2': I
splits:
- name: train
num_bytes: 95598848
num_examples: 12574
- name: validation
num_bytes: 18151512
num_examples: 2519
- name: test
num_bytes: 36511145
num_examples: 5038
download_size: 23741909
dataset_size: 150261505
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
minh21/COVID-QA-testset-biencoder-data-75_25 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: context_chunks
sequence: string
- name: document_id
dtype: int64
- name: id
dtype: int64
- name: context
dtype: string
splits:
- name: train
num_bytes: 48986357
num_examples: 513
download_size: 8353824
dataset_size: 48986357
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "COVID-QA-testset-biencoder-data-75_25"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Multimodal-Fatima/OxfordFlowers_test_embeddings | ---
dataset_info:
features:
- name: image
dtype: image
- name: id
dtype: int64
- name: vision_embeddings
sequence: float32
splits:
- name: openai_clip_vit_large_patch14
num_bytes: 279299546.0
num_examples: 6149
download_size: 283131238
dataset_size: 279299546.0
---
# Dataset Card for "OxfordFlowers_test_embeddings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/ah_duo_theapothecarydiaries | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Ah-Duo (The Apothecary Diaries)
This is the dataset of Ah-Duo (The Apothecary Diaries), containing 53 images and their tags.
The core tags of this character are `long_hair, purple_hair, earrings, purple_eyes, hair_ornament`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:-----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 53 | 42.09 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ah_duo_theapothecarydiaries/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 53 | 42.08 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ah_duo_theapothecarydiaries/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 107 | 74.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ah_duo_theapothecarydiaries/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/ah_duo_theapothecarydiaries',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, blue_sky, cloud, day, outdoors, japanese_clothes, jewelry, solo, closed_mouth, hair_bun, hanfu, upper_body |
| 1 | 6 |  |  |  |  |  | 1girl, hanfu, jewelry, looking_at_viewer, solo, 1boy, long_sleeves, male_focus, smile, indoors, japanese_clothes |
| 2 | 8 |  |  |  |  |  | 1girl, jewelry, solo, looking_at_viewer, portrait, parted_lips |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blue_sky | cloud | day | outdoors | japanese_clothes | jewelry | solo | closed_mouth | hair_bun | hanfu | upper_body | looking_at_viewer | 1boy | long_sleeves | male_focus | smile | indoors | portrait | parted_lips |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------|:--------|:------|:-----------|:-------------------|:----------|:-------|:---------------|:-----------|:--------|:-------------|:--------------------|:-------|:---------------|:-------------|:--------|:----------|:-----------|:--------------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | | | | | X | X | X | | | X | | X | X | X | X | X | X | | |
| 2 | 8 |  |  |  |  |  | X | | | | | | X | X | | | | | X | | | | | | X | X |
|
liuyanchen1015/MULTI_VALUE_sst2_if_would | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev
num_bytes: 3563
num_examples: 24
- name: test
num_bytes: 10494
num_examples: 65
- name: train
num_bytes: 105964
num_examples: 811
download_size: 58246
dataset_size: 120021
---
# Dataset Card for "MULTI_VALUE_sst2_if_would"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
datajuicer/redpajama-stack-code-refined-by-data-juicer | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- data-juicer
- pretraining
size_categories:
- 10M<n<100M
---
# RedPajama & TheStack -- Github Code (refined by Data-Juicer)
A refined version of the Github Code dataset from RedPajama & TheStack, produced by [Data-Juicer](https://github.com/alibaba/data-juicer). Some "bad" samples were removed from the original dataset to make it higher quality.
This dataset is usually used to pretrain a Large Language Model.
**Notice**: Here is a small subset for previewing. The whole dataset is available [here](https://dail-wlcb.oss-cn-wulanchabu.aliyuncs.com/LLM_data/our_refined_datasets/pretraining/redpajama-stack-code-refine-result.jsonl) (About 232GB).
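Given the size of the full JSONL file, it may be more practical to stream records line by line than to load everything into memory. A minimal stdlib sketch; the `text` field name is an assumption (Data-Juicer's default key), so verify it against the actual file's schema:

```python
import json

def iter_jsonl(path):
    """Yield one record at a time from a JSONL file without loading it all."""
    with open(path, 'r', encoding='utf-8') as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

def summarize(path, limit=1000):
    """Count samples and total text length for the first `limit` records.
    The 'text' key is an assumption -- check the actual file's schema."""
    n = total_chars = 0
    for record in iter_jsonl(path):
        n += 1
        total_chars += len(record.get('text', ''))
        if n >= limit:
            break
    return n, total_chars
```

This keeps memory use constant regardless of file size, which matters for a ~232GB file.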
## Dataset Information
- Number of samples: 49,279,344 (keeps ~52.09% of the original dataset)
## Refining Recipe
### RedPajama code refinement
```yaml
# global parameters
project_name: 'Data-Juicer-recipes-code-rp'
dataset_path: '/path/to/your/dataset' # path to your dataset directory or file
export_path: '/path/to/your/dataset.jsonl'
np: 50 # number of subprocesses used to process your dataset
open_tracer: true
# process schedule
# a list of several process operators with their arguments
process:
- clean_email_mapper:
- clean_links_mapper:
- fix_unicode_mapper:
- punctuation_normalization_mapper:
- whitespace_normalization_mapper:
- clean_copyright_mapper:
- alphanumeric_filter:
tokenization: False
min_ratio: 0.4
max_ratio: 0.8
- alphanumeric_filter:
tokenization: True
min_ratio: 1.5
max_ratio: 3
- average_line_length_filter:
min_len: 15
max_len: 100
- character_repetition_filter:
rep_len: 10
min_ratio: 0.05
max_ratio: 0.3
- maximum_line_length_filter:
min_len: 50
max_len: 500
- text_length_filter:
min_len: 300
- words_num_filter:
lang: en
tokenization: False
min_num: 30
max_num: 5000
- word_repetition_filter:
lang: en
tokenization: False
rep_len: 10
max_ratio: 0.1
- document_simhash_deduplicator:
tokenization: space
window_size: 6
lowercase: true
ignore_pattern: '\p{P}'
num_blocks: 6
hamming_distance: 4
```
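The `character_repetition_filter` above keeps only samples whose repeated character n-gram ratio falls inside `[min_ratio, max_ratio]`. A toy illustration of the idea, assuming a simple duplicate-count scoring (not Data-Juicer's exact implementation, which should be consulted for the real formula):

```python
from collections import Counter

def char_rep_ratio(text, rep_len=10):
    """Fraction of character n-grams (n=rep_len) that occur more than once."""
    ngrams = [text[i:i + rep_len] for i in range(len(text) - rep_len + 1)]
    if not ngrams:
        return 0.0
    counts = Counter(ngrams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(ngrams)

def keep_sample(text, rep_len=10, min_ratio=0.05, max_ratio=0.3):
    """Mirror of the recipe's filter bounds: drop if too repetitive or too flat."""
    return min_ratio <= char_rep_ratio(text, rep_len) <= max_ratio
```

Under this scoring, a sample made of one repeated character scores 1.0 and is dropped by the `max_ratio: 0.3` bound.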
### TheStack code refinement (only samples with max_stars_count >= 20)
```yaml
# global parameters
project_name: 'Data-Juicer-recipes-the-stack'
dataset_path: '/path/to/your/dataset' # path to your dataset directory or file
export_path: '/path/to/your/dataset.jsonl'
text_key: 'content'
np: 50 # number of subprocesses used to process your dataset
open_tracer: true
# process schedule
# a list of several process operators with their arguments
process:
- clean_email_mapper:
- clean_links_mapper:
- fix_unicode_mapper:
- punctuation_normalization_mapper:
- whitespace_normalization_mapper:
- clean_copyright_mapper:
- alphanumeric_filter: # 18766
tokenization: false
min_ratio: 0.2 # < 3sigma (0.3791)
max_ratio: 0.9163 # 3sigma
- alphanumeric_filter: # 146432
tokenization: true
min_ratio: 0.546 # 3sigma
max_ratio: 3.65 # 3sigma
- average_line_length_filter: # for code
min_len: 10 # > 3sigma (0) -- 48790
max_len: 150 # < 3sigma (15603) -- 233275
- character_repetition_filter:
max_ratio: 0.36 # 3sigma -- 346875
- maximum_line_length_filter: # for code
max_len: 1000 # remove 256670 samples
- text_length_filter:
max_len: 96714 # 3sigma -- 190006
- words_num_filter:
min_num: 20 # remove 1504958 samples
max_num: 6640 # 3sigma -- remove 179847 samples
- word_repetition_filter:
rep_len: 10
max_ratio: 0.357 # 3sigma -- 598462
- document_simhash_deduplicator:
tokenization: space
window_size: 6
lowercase: true
ignore_pattern: '\p{P}'
num_blocks: 6
hamming_distance: 4
```
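Both recipes above finish with `document_simhash_deduplicator`: near-duplicate documents receive similar 64-bit fingerprints, and pairs within a small Hamming distance are treated as duplicates. A toy sketch of the fingerprinting step, assuming plain whitespace tokens (the real operator additionally shingles tokens into windows of 6 and blocks fingerprints for efficiency):

```python
import hashlib

def simhash(text, bits=64):
    """64-bit SimHash over whitespace tokens (matching `tokenization: space`)."""
    vector = [0] * bits
    for token in text.lower().split():
        h = int(hashlib.md5(token.encode('utf-8')).hexdigest(), 16)
        for i in range(bits):
            vector[i] += 1 if (h >> i) & 1 else -1
    fingerprint = 0
    for i in range(bits):
        if vector[i] > 0:
            fingerprint |= 1 << i
    return fingerprint

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count('1')
```

With the recipe's `hamming_distance: 4`, two documents whose fingerprints differ in at most 4 bits would be considered near-duplicates.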
### Merge and Remove Duplicate Samples
```yaml
project_name: 'Data-Juicer-recipes-code'
dataset_path: '/path/to/your/dataset' # path to your dataset directory or file
export_path: '/path/to/your/dataset.jsonl' # path to your dataset result file
np: 50 # number of subprocesses used to process your dataset
open_tracer: true
# process schedule
# a list of several process operators with their arguments
process:
- document_simhash_deduplicator:
tokenization: space
window_size: 6
lowercase: true
ignore_pattern: '\p{P}'
num_blocks: 6
hamming_distance: 4
``` |
cestwc/sample | ---
dataset_info:
features:
- name: Unique Case Record Key
dtype: int64
- name: Description
dtype: string
- name: Subject
dtype: string
- name: Reporting Sub Category
dtype:
class_label:
names:
'0': CRC Issues
'1': Pedestrian Paths/POB/Linkway - Requests
'2': Parks Infra - Others
'3': Cockroaches
'4': EPS Malfunction
'5': Illegal Parking - Public Housing Disabled Lots
'6': Abandoned Bulky Items/Dumping
'7': Idling Engines
'8': Illegal Parking - Public Housing General Reserved Lots
'9': Noise - Events
'10': Illegal Activities - Others
'11': Hoarding
'12': Tree Removal
'13': Public Housing Infra - Others
'14': Rodents
'15': Playground/Fitness Equipment - Public Housing
'16': Grass Cutting
'17': Public Toilet Issues
'18': Hazardous Toxic
'19': Water Supply and Pressure - Other Public Areas
'20': Smell/Smoky - Food Establishments/Cooking
'21': Illegal Parking - Roads
'22': Illegal Parking - Public Housing Loading and Unloading Bays
'23': Water - Others
'24': Graffiti/Stains
'25': Obstruction of Public Accessibility by Articles
'26': PMDs/PABs/Bicycles Usage Issues
'27': Dust/Smell/Light - Construction
'28': BCA - Building and Construction Matters
'29': Flooding/Ponding
'30': Smell - Drains/Canals/Sewer/Manhole
'31': Noise - Food Establishments/Entertainment Outlets
'32': Bus Shelters - Maintenance
'33': CCTV Issues
'34': Neighbour Disputes
'35': Dogs - Nuisance
'36': Dust - Others
'37': Electricity Supply
'38': Obstruction - Public Housing Common Areas
'39': Pollution - Others
'40': Noise - Others
'41': Noise - Renovation
'42': Water Pipe Maintenance and Issues - Public Housing
'43': Tree Planting
'44': Car Park - Maintenance
'45': Roads/Structures - Maintenance
'46': Street Lights - Maintenance
'47': Dirty Areas/Litter - Other Public Areas
'48': Illegal Advertisements
'49': High Rise Littering/Killer Litter
'50': Lift - Others
'51': Birds - Nuisance
'52': Ceiling Leak
'53': Animals - Others
'54': Traffic Lights - Maintenance
'55': Connectivity Related Infrastructure - Others
'56': Illegal Parking - Motorcycles at Public Housing Common Areas
'57': Pedestrian Crossings
'58': Electrical - Others
'59': Cats - Nuisance
'60': Noise - Construction
'61': Sewer - Other Public Areas
'62': Spalling Concrete - Public Housing Common Areas
'63': Wall Seepage
'64': Urine/Faeces/Spitting
'65': Sewer - Public Housing
'66': Noise - Neighbours
'67': Bees/Wasps/Hornets
'68': Lift - Breakdown
'69': Dead Animals/Birds
'70': Tree/Shrub Maintenance
'71': Corridor Lighting
'72': Bus Shelters - Requests
'73': Car Park - Requests
'74': Drains/Drainage - Public Housing
'75': Spalling Concrete - Within HDB Flat
'76': Dirty Areas/Litter - Public Housing
'77': Water Pipe Maintenance and Issues - Other Public Areas
'78': Traffic Lights - Requests
'79': Waste Pipe Defects - Public Housing
'80': Waste and Recycling Management
'81': Fallen Tree/Branch
'82': Infra - Others
'83': Building Defects
'84': Wet Laundry
'85': Illegal Parking - Heavy Vehicle Parking at Public Housing
'86': Outdoor Lighting
'87': Dirty Drains/Canals
'88': Pedestrian Paths/POB/Linkway - Maintenance
'89': Noise - Congregation in Common Areas
'90': Smoking
'91': Bins/Recycling
'92': Road Works
'93': Illegal Parking - Serious Obstruction
'94': Smell - Other Sources
'95': Pests - Others
'96': Road Signs - Maintenance
'97': Water Quality - Other Public Areas
'98': Air Pollution/Smoke
'99': Drains/Drainage - Other Public Areas
'100': Illegal Parking - Public Housing Car Parks/Service Roads
'101': Mosquitoes
'102': Water Supply and Pressure - Public Housing
'103': Parks Infra - Lighting
- name: Reporting Category
dtype:
class_label:
names:
'0': Cleanliness
'1': Enforcement Matters
'2': Pests
'3': Pollution
'4': General Infrastructure/Facilities
'5': Public Housing Lifts
'6': Connectivity Related Infrastructure
'7': Animals and Birds
'8': Public Housing Infrastructure (Excl Lifts)
'9': Greenery
'10': Neighbour Issues
'11': Illegal Parking
'12': Noise
- name: Preprocessed
dtype: string
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
- name: labels
dtype: int64
- name: tags
sequence:
sequence: float64
splits:
- name: '2017'
num_bytes: 33338600
num_examples: 45675
download_size: 16008523
dataset_size: 33338600
configs:
- config_name: default
data_files:
- split: '2017'
path: data/2017-*
---
# Dataset Card for "sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gg-ai/es-2811-no-demoji-m | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: val
path: data/val-*
dataset_info:
features:
- name: text
dtype: string
- name: clean_text
dtype: string
- name: sent
dtype: int64
splits:
- name: train
num_bytes: 5835720
num_examples: 16694
- name: test
num_bytes: 1241079
num_examples: 3547
- name: val
num_bytes: 213642
num_examples: 626
download_size: 4538232
dataset_size: 7290441
---
# Dataset Card for "es-2811-no-demoji-m"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
florentgbelidji/oa_german | ---
license: apache-2.0
dataset_info:
features:
- name: conversation_id
dtype: string
- name: user_id
dtype: string
- name: created_date
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: lang_original
dtype: string
- name: could_be_code
dtype: bool
splits:
- name: train_english
num_bytes: 29675151
num_examples: 18192
- name: train_german
num_bytes: 28931906
num_examples: 18192
download_size: 21854409
dataset_size: 58607057
configs:
- config_name: default
data_files:
- split: train_english
path: data/train_english-*
- split: train_german
path: data/train_german-*
---
|
TrainingDataPro/bald-people-segmentation-dataset | ---
license: cc-by-nc-nd-4.0
task_categories:
- image-segmentation
language:
- en
tags:
- code
- medical
---
# Bald People Segmentation Dataset
The dataset consists of images of bald people and corresponding segmentation masks.
Segmentation masks highlight the regions of the images that delineate the bald scalp. By using these segmentation masks, researchers and practitioners can focus only on the areas of interest.
The dataset is designed to be accessible and easy to use, providing high-resolution images and corresponding segmentation masks in PNG format.
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market/bald-spots-of-men-and-women?utm_source=huggingface&utm_medium=cpc&utm_campaign=bald-people-segmentation-dataset) to discuss your requirements, learn about the price and buy the dataset.
# Content
### The dataset includes 2 folders:
- **Female** - the folder includes folders corresponding to each woman in the sample. Each subfolder contains top-view images of women's heads and segmentation masks for the original photos.
- **Male** - the folder includes folders corresponding to each man in the sample. Each subfolder contains front and top-view images of men's heads and segmentation masks for the original photos.

### File with the extension .csv
- **link**: link to access the media file,
- **type**: type of the image,
- **gender**: gender of the person in the photo
# Bald People Segmentation datasets can be made in accordance with your requirements.
## [**TrainingData**](https://trainingdata.pro/data-market/bald-spots-of-men-and-women?utm_source=huggingface&utm_medium=cpc&utm_campaign=bald-people-segmentation-dataset) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** |
victunes/nart-100k-synthetic-buddy-mixed-names | ---
license: openrail
task_categories:
- text-generation
language:
- en
tags:
- croissant
pretty_name: Synthetic Therapy Conversations
size_categories:
- 10K<n<100K
---
## Dataset Modifications
- Renamed the patient using names drawn from https://github.com/dominictarr/random-name/blob/master/names.txt
- Renamed the therapist to "Buddy"
The modification script is included in the repo.
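A toy sketch of what such a renaming pass could look like (the placeholder strings and function shape are hypothetical; the actual script shipped in the repo may differ):

```python
import random

def rename_speakers(conversations, names, seed=0):
    """Replace a patient placeholder with a randomly chosen name and a
    therapist placeholder with 'Buddy' in every conversation.
    The 'PATIENT'/'THERAPIST' placeholders are illustrative assumptions."""
    rng = random.Random(seed)  # seeded for reproducible name assignment
    out = []
    for convo in conversations:
        patient = rng.choice(names)
        out.append(convo.replace('PATIENT', patient).replace('THERAPIST', 'Buddy'))
    return out
```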
Original dataset card: https://huggingface.co/datasets/jerryjalapeno/nart-100k-synthetic
Keep in mind that this dataset is entirely synthetic; it is not fully representative of real therapy situations. If you are training an LLM therapist, keep in mind the limitations of LLMs and highlight those limitations to users in a responsible manner. |
Multimodal-Fatima/DTD_parition1_test_facebook_opt_2.7b_Visclues_ns_1880_random | ---
dataset_info:
features:
- name: id
dtype: int64
- name: image
dtype: image
- name: prompt
dtype: string
- name: true_label
dtype: string
- name: prediction
dtype: string
- name: scores
sequence: float64
splits:
- name: fewshot_1_bs_16
num_bytes: 92562783.0
num_examples: 1880
- name: fewshot_3_bs_16
num_bytes: 93877509.0
num_examples: 1880
download_size: 182697658
dataset_size: 186440292.0
---
# Dataset Card for "DTD_parition1_test_facebook_opt_2.7b_Visclues_ns_1880_random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
juliensimon/food102 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
0: apple_pie
1: baby_back_ribs
2: baklava
3: beef_carpaccio
4: beef_tartare
5: beet_salad
6: beignets
7: bibimbap
8: boeuf_bourguignon
9: bread_pudding
10: breakfast_burrito
11: bruschetta
12: caesar_salad
13: cannoli
14: caprese_salad
15: carrot_cake
16: ceviche
17: cheese_plate
18: cheesecake
19: chicken_curry
20: chicken_quesadilla
21: chicken_wings
22: chocolate_cake
23: chocolate_mousse
24: churros
25: clam_chowder
26: club_sandwich
27: crab_cakes
28: creme_brulee
29: croque_madame
30: cup_cakes
31: deviled_eggs
32: donuts
33: dumplings
34: edamame
35: eggs_benedict
36: escargots
37: falafel
38: filet_mignon
39: fish_and_chips
40: foie_gras
41: french_fries
42: french_onion_soup
43: french_toast
44: fried_calamari
45: fried_rice
46: frozen_yogurt
47: garlic_bread
48: gnocchi
49: greek_salad
50: grilled_cheese_sandwich
51: grilled_salmon
52: guacamole
53: gyoza
54: hamburger
55: hot_and_sour_soup
56: hot_dog
57: huevos_rancheros
58: hummus
59: ice_cream
60: lasagna
61: lobster_bisque
62: lobster_roll_sandwich
63: macaroni_and_cheese
64: macarons
65: miso_soup
66: mussels
67: nachos
68: omelette
69: onion_rings
70: oysters
71: pad_thai
72: paella
73: pancakes
74: panna_cotta
75: peking_duck
76: pho
77: pizza
78: pork_chop
79: poutine
80: prime_rib
81: pulled_pork_sandwich
82: ramen
83: ravioli
84: red_velvet_cake
85: risotto
86: samosa
87: sashimi
88: scallops
89: seaweed_salad
90: shrimp_and_grits
91: spaghetti_bolognese
92: spaghetti_carbonara
93: spring_rolls
94: steak
95: strawberry_shortcake
96: sushi
97: tacos
98: takoyaki
99: tiramisu
100: tuna_tartare
101: waffles
splits:
- name: test
num_bytes: 1461368965.25
num_examples: 25500
- name: train
num_bytes: 4285789478.25
num_examples: 76500
download_size: 5534173074
dataset_size: 5747158443.5
---
# Dataset Card for "food102"
This is based on the [food101](https://huggingface.co/datasets/food101) dataset with an extra class generated with a Stable Diffusion model.
A detailed walk-through is available on [YouTube](https://youtu.be/sIe0eo3fYQ4).
|
KK1mo/tedigan_gen_2 | ---
dataset_info:
features:
- name: id
dtype: string
- name: caption
dtype: string
- name: generated_image
dtype: image
splits:
- name: train
num_bytes: 59174541.0
num_examples: 500
download_size: 59159259
dataset_size: 59174541.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
napatswift/th-txt-img | ---
annotations_creators:
- machine-generated
language:
- th
language_creators:
- machine-generated
license:
- cc
multilinguality:
- monolingual
pretty_name: ThaiTextImage
size_categories:
- 100K<n<1M
source_datasets: []
tags:
- 500k
task_categories:
- image-to-text
task_ids:
- image-captioning
---
|
Haodon/Cylonix_ASR_CV | ---
license: apache-2.0
---
|
griffin/progressive_summarization_test | ---
dataset_info:
features:
- name: article
dtype: string
- name: highlights
dtype: string
- name: id
dtype: string
- name: summarize_prompt
dtype: string
- name: progressive_prompt
dtype: string
splits:
- name: test
num_bytes: 1301120
num_examples: 100
download_size: 813617
dataset_size: 1301120
---
# Dataset Card for "progressive_summarization_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Xhaheen/Urdu_Shaheen_Alpaca_cleaned_filtered | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: urdu_instruction
dtype: string
- name: urdu_input
dtype: string
- name: urdu_output
dtype: string
splits:
- name: train
num_bytes: 81528460
num_examples: 28910
download_size: 40992186
dataset_size: 81528460
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/details_NeuralNovel__Ignis-7B-DPO | ---
pretty_name: Evaluation run of NeuralNovel/Ignis-7B-DPO
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [NeuralNovel/Ignis-7B-DPO](https://huggingface.co/NeuralNovel/Ignis-7B-DPO) on\
\ the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_NeuralNovel__Ignis-7B-DPO\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-29T12:08:32.687568](https://huggingface.co/datasets/open-llm-leaderboard/details_NeuralNovel__Ignis-7B-DPO/blob/main/results_2024-02-29T12-08-32.687568.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.591254836807939,\n\
\ \"acc_stderr\": 0.0334070366622428,\n \"acc_norm\": 0.5955459854243185,\n\
\ \"acc_norm_stderr\": 0.03410660590947557,\n \"mc1\": 0.4785801713586291,\n\
\ \"mc1_stderr\": 0.01748743214471181,\n \"mc2\": 0.6545579637974486,\n\
\ \"mc2_stderr\": 0.01605098507055363\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6390784982935154,\n \"acc_stderr\": 0.014034761386175456,\n\
\ \"acc_norm\": 0.6629692832764505,\n \"acc_norm_stderr\": 0.013813476652902276\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6716789484166501,\n\
\ \"acc_stderr\": 0.0046864258512532815,\n \"acc_norm\": 0.8485361481776539,\n\
\ \"acc_norm_stderr\": 0.0035776774950640805\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5925925925925926,\n\
\ \"acc_stderr\": 0.04244633238353228,\n \"acc_norm\": 0.5925925925925926,\n\
\ \"acc_norm_stderr\": 0.04244633238353228\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6381578947368421,\n \"acc_stderr\": 0.03910525752849723,\n\
\ \"acc_norm\": 0.6381578947368421,\n \"acc_norm_stderr\": 0.03910525752849723\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.51,\n\
\ \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.51,\n \
\ \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.690566037735849,\n \"acc_stderr\": 0.02845015479411864,\n\
\ \"acc_norm\": 0.690566037735849,\n \"acc_norm_stderr\": 0.02845015479411864\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6388888888888888,\n\
\ \"acc_stderr\": 0.04016660030451233,\n \"acc_norm\": 0.6388888888888888,\n\
\ \"acc_norm_stderr\": 0.04016660030451233\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.42,\n \"acc_stderr\": 0.049604496374885836,\n \
\ \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.049604496374885836\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\"\
: 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n\
\ \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6011560693641619,\n\
\ \"acc_stderr\": 0.0373362665538351,\n \"acc_norm\": 0.6011560693641619,\n\
\ \"acc_norm_stderr\": 0.0373362665538351\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.45098039215686275,\n \"acc_stderr\": 0.04951218252396264,\n\
\ \"acc_norm\": 0.45098039215686275,\n \"acc_norm_stderr\": 0.04951218252396264\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n\
\ \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5191489361702127,\n \"acc_stderr\": 0.032662042990646796,\n\
\ \"acc_norm\": 0.5191489361702127,\n \"acc_norm_stderr\": 0.032662042990646796\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.41228070175438597,\n\
\ \"acc_stderr\": 0.04630653203366595,\n \"acc_norm\": 0.41228070175438597,\n\
\ \"acc_norm_stderr\": 0.04630653203366595\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5517241379310345,\n \"acc_stderr\": 0.04144311810878152,\n\
\ \"acc_norm\": 0.5517241379310345,\n \"acc_norm_stderr\": 0.04144311810878152\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.38095238095238093,\n \"acc_stderr\": 0.025010749116137595,\n \"\
acc_norm\": 0.38095238095238093,\n \"acc_norm_stderr\": 0.025010749116137595\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.35714285714285715,\n\
\ \"acc_stderr\": 0.04285714285714281,\n \"acc_norm\": 0.35714285714285715,\n\
\ \"acc_norm_stderr\": 0.04285714285714281\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252606,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252606\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.667741935483871,\n\
\ \"acc_stderr\": 0.026795560848122797,\n \"acc_norm\": 0.667741935483871,\n\
\ \"acc_norm_stderr\": 0.026795560848122797\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5369458128078818,\n \"acc_stderr\": 0.035083705204426656,\n\
\ \"acc_norm\": 0.5369458128078818,\n \"acc_norm_stderr\": 0.035083705204426656\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.58,\n \"acc_stderr\": 0.04960449637488583,\n \"acc_norm\"\
: 0.58,\n \"acc_norm_stderr\": 0.04960449637488583\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7151515151515152,\n \"acc_stderr\": 0.03524390844511781,\n\
\ \"acc_norm\": 0.7151515151515152,\n \"acc_norm_stderr\": 0.03524390844511781\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7676767676767676,\n \"acc_stderr\": 0.03008862949021749,\n \"\
acc_norm\": 0.7676767676767676,\n \"acc_norm_stderr\": 0.03008862949021749\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8134715025906736,\n \"acc_stderr\": 0.028112091210117457,\n\
\ \"acc_norm\": 0.8134715025906736,\n \"acc_norm_stderr\": 0.028112091210117457\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.5512820512820513,\n \"acc_stderr\": 0.02521731518484649,\n \
\ \"acc_norm\": 0.5512820512820513,\n \"acc_norm_stderr\": 0.02521731518484649\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3074074074074074,\n \"acc_stderr\": 0.028133252578815632,\n \
\ \"acc_norm\": 0.3074074074074074,\n \"acc_norm_stderr\": 0.028133252578815632\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6512605042016807,\n \"acc_stderr\": 0.030956636328566545,\n\
\ \"acc_norm\": 0.6512605042016807,\n \"acc_norm_stderr\": 0.030956636328566545\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3576158940397351,\n \"acc_stderr\": 0.03913453431177258,\n \"\
acc_norm\": 0.3576158940397351,\n \"acc_norm_stderr\": 0.03913453431177258\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.7853211009174312,\n \"acc_stderr\": 0.017604304149256483,\n \"\
acc_norm\": 0.7853211009174312,\n \"acc_norm_stderr\": 0.017604304149256483\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.48148148148148145,\n \"acc_stderr\": 0.03407632093854054,\n \"\
acc_norm\": 0.48148148148148145,\n \"acc_norm_stderr\": 0.03407632093854054\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7549019607843137,\n \"acc_stderr\": 0.030190282453501943,\n \"\
acc_norm\": 0.7549019607843137,\n \"acc_norm_stderr\": 0.030190282453501943\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7510548523206751,\n \"acc_stderr\": 0.028146970599422644,\n \
\ \"acc_norm\": 0.7510548523206751,\n \"acc_norm_stderr\": 0.028146970599422644\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6412556053811659,\n\
\ \"acc_stderr\": 0.03219079200419996,\n \"acc_norm\": 0.6412556053811659,\n\
\ \"acc_norm_stderr\": 0.03219079200419996\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.6412213740458015,\n \"acc_stderr\": 0.04206739313864908,\n\
\ \"acc_norm\": 0.6412213740458015,\n \"acc_norm_stderr\": 0.04206739313864908\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7024793388429752,\n \"acc_stderr\": 0.04173349148083499,\n \"\
acc_norm\": 0.7024793388429752,\n \"acc_norm_stderr\": 0.04173349148083499\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7407407407407407,\n\
\ \"acc_stderr\": 0.04236511258094633,\n \"acc_norm\": 0.7407407407407407,\n\
\ \"acc_norm_stderr\": 0.04236511258094633\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7055214723926381,\n \"acc_stderr\": 0.03581165790474082,\n\
\ \"acc_norm\": 0.7055214723926381,\n \"acc_norm_stderr\": 0.03581165790474082\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.36607142857142855,\n\
\ \"acc_stderr\": 0.045723723587374296,\n \"acc_norm\": 0.36607142857142855,\n\
\ \"acc_norm_stderr\": 0.045723723587374296\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7087378640776699,\n \"acc_stderr\": 0.04498676320572924,\n\
\ \"acc_norm\": 0.7087378640776699,\n \"acc_norm_stderr\": 0.04498676320572924\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8589743589743589,\n\
\ \"acc_stderr\": 0.022801382534597552,\n \"acc_norm\": 0.8589743589743589,\n\
\ \"acc_norm_stderr\": 0.022801382534597552\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7662835249042146,\n\
\ \"acc_stderr\": 0.015133383278988829,\n \"acc_norm\": 0.7662835249042146,\n\
\ \"acc_norm_stderr\": 0.015133383278988829\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.661849710982659,\n \"acc_stderr\": 0.025469770149400172,\n\
\ \"acc_norm\": 0.661849710982659,\n \"acc_norm_stderr\": 0.025469770149400172\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.264804469273743,\n\
\ \"acc_stderr\": 0.014756906483260664,\n \"acc_norm\": 0.264804469273743,\n\
\ \"acc_norm_stderr\": 0.014756906483260664\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6601307189542484,\n \"acc_stderr\": 0.027121956071388852,\n\
\ \"acc_norm\": 0.6601307189542484,\n \"acc_norm_stderr\": 0.027121956071388852\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.639871382636656,\n\
\ \"acc_stderr\": 0.027264297599804015,\n \"acc_norm\": 0.639871382636656,\n\
\ \"acc_norm_stderr\": 0.027264297599804015\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6296296296296297,\n \"acc_stderr\": 0.02686949074481527,\n\
\ \"acc_norm\": 0.6296296296296297,\n \"acc_norm_stderr\": 0.02686949074481527\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4574468085106383,\n \"acc_stderr\": 0.029719281272236848,\n \
\ \"acc_norm\": 0.4574468085106383,\n \"acc_norm_stderr\": 0.029719281272236848\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4152542372881356,\n\
\ \"acc_stderr\": 0.012585471793400664,\n \"acc_norm\": 0.4152542372881356,\n\
\ \"acc_norm_stderr\": 0.012585471793400664\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6213235294117647,\n \"acc_stderr\": 0.02946513363977613,\n\
\ \"acc_norm\": 0.6213235294117647,\n \"acc_norm_stderr\": 0.02946513363977613\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6094771241830066,\n \"acc_stderr\": 0.019737008998094597,\n \
\ \"acc_norm\": 0.6094771241830066,\n \"acc_norm_stderr\": 0.019737008998094597\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6454545454545455,\n\
\ \"acc_stderr\": 0.045820048415054174,\n \"acc_norm\": 0.6454545454545455,\n\
\ \"acc_norm_stderr\": 0.045820048415054174\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.6857142857142857,\n \"acc_stderr\": 0.02971932942241747,\n\
\ \"acc_norm\": 0.6857142857142857,\n \"acc_norm_stderr\": 0.02971932942241747\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7661691542288557,\n\
\ \"acc_stderr\": 0.029929415408348398,\n \"acc_norm\": 0.7661691542288557,\n\
\ \"acc_norm_stderr\": 0.029929415408348398\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.82,\n \"acc_stderr\": 0.03861229196653693,\n \
\ \"acc_norm\": 0.82,\n \"acc_norm_stderr\": 0.03861229196653693\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4578313253012048,\n\
\ \"acc_stderr\": 0.038786267710023595,\n \"acc_norm\": 0.4578313253012048,\n\
\ \"acc_norm_stderr\": 0.038786267710023595\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8245614035087719,\n \"acc_stderr\": 0.02917088550072768,\n\
\ \"acc_norm\": 0.8245614035087719,\n \"acc_norm_stderr\": 0.02917088550072768\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4785801713586291,\n\
\ \"mc1_stderr\": 0.01748743214471181,\n \"mc2\": 0.6545579637974486,\n\
\ \"mc2_stderr\": 0.01605098507055363\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7995264404104183,\n \"acc_stderr\": 0.01125195828120508\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3305534495830174,\n \
\ \"acc_stderr\": 0.012957496367085028\n }\n}\n```"
repo_url: https://huggingface.co/NeuralNovel/Ignis-7B-DPO
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|arc:challenge|25_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|gsm8k|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hellaswag|10_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-29T12-08-32.687568.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-29T12-08-32.687568.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- '**/details_harness|winogrande|5_2024-02-29T12-08-32.687568.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-29T12-08-32.687568.parquet'
- config_name: results
data_files:
- split: 2024_02_29T12_08_32.687568
path:
- results_2024-02-29T12-08-32.687568.parquet
- split: latest
path:
- results_2024-02-29T12-08-32.687568.parquet
---
# Dataset Card for Evaluation run of NeuralNovel/Ignis-7B-DPO
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [NeuralNovel/Ignis-7B-DPO](https://huggingface.co/NeuralNovel/Ignis-7B-DPO) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_NeuralNovel__Ignis-7B-DPO",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-02-29T12:08:32.687568](https://huggingface.co/datasets/open-llm-leaderboard/details_NeuralNovel__Ignis-7B-DPO/blob/main/results_2024-02-29T12-08-32.687568.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the "results" configuration and in the "latest" split of each eval):
```json
{
"all": {
"acc": 0.591254836807939,
"acc_stderr": 0.0334070366622428,
"acc_norm": 0.5955459854243185,
"acc_norm_stderr": 0.03410660590947557,
"mc1": 0.4785801713586291,
"mc1_stderr": 0.01748743214471181,
"mc2": 0.6545579637974486,
"mc2_stderr": 0.01605098507055363
},
"harness|arc:challenge|25": {
"acc": 0.6390784982935154,
"acc_stderr": 0.014034761386175456,
"acc_norm": 0.6629692832764505,
"acc_norm_stderr": 0.013813476652902276
},
"harness|hellaswag|10": {
"acc": 0.6716789484166501,
"acc_stderr": 0.0046864258512532815,
"acc_norm": 0.8485361481776539,
"acc_norm_stderr": 0.0035776774950640805
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5925925925925926,
"acc_stderr": 0.04244633238353228,
"acc_norm": 0.5925925925925926,
"acc_norm_stderr": 0.04244633238353228
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6381578947368421,
"acc_stderr": 0.03910525752849723,
"acc_norm": 0.6381578947368421,
"acc_norm_stderr": 0.03910525752849723
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.690566037735849,
"acc_stderr": 0.02845015479411864,
"acc_norm": 0.690566037735849,
"acc_norm_stderr": 0.02845015479411864
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6388888888888888,
"acc_stderr": 0.04016660030451233,
"acc_norm": 0.6388888888888888,
"acc_norm_stderr": 0.04016660030451233
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.42,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.42,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6011560693641619,
"acc_stderr": 0.0373362665538351,
"acc_norm": 0.6011560693641619,
"acc_norm_stderr": 0.0373362665538351
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.45098039215686275,
"acc_stderr": 0.04951218252396264,
"acc_norm": 0.45098039215686275,
"acc_norm_stderr": 0.04951218252396264
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5191489361702127,
"acc_stderr": 0.032662042990646796,
"acc_norm": 0.5191489361702127,
"acc_norm_stderr": 0.032662042990646796
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.41228070175438597,
"acc_stderr": 0.04630653203366595,
"acc_norm": 0.41228070175438597,
"acc_norm_stderr": 0.04630653203366595
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5517241379310345,
"acc_stderr": 0.04144311810878152,
"acc_norm": 0.5517241379310345,
"acc_norm_stderr": 0.04144311810878152
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.38095238095238093,
"acc_stderr": 0.025010749116137595,
"acc_norm": 0.38095238095238093,
"acc_norm_stderr": 0.025010749116137595
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.35714285714285715,
"acc_stderr": 0.04285714285714281,
"acc_norm": 0.35714285714285715,
"acc_norm_stderr": 0.04285714285714281
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252606,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252606
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.667741935483871,
"acc_stderr": 0.026795560848122797,
"acc_norm": 0.667741935483871,
"acc_norm_stderr": 0.026795560848122797
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5369458128078818,
"acc_stderr": 0.035083705204426656,
"acc_norm": 0.5369458128078818,
"acc_norm_stderr": 0.035083705204426656
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.58,
"acc_stderr": 0.04960449637488583,
"acc_norm": 0.58,
"acc_norm_stderr": 0.04960449637488583
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7151515151515152,
"acc_stderr": 0.03524390844511781,
"acc_norm": 0.7151515151515152,
"acc_norm_stderr": 0.03524390844511781
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7676767676767676,
"acc_stderr": 0.03008862949021749,
"acc_norm": 0.7676767676767676,
"acc_norm_stderr": 0.03008862949021749
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8134715025906736,
"acc_stderr": 0.028112091210117457,
"acc_norm": 0.8134715025906736,
"acc_norm_stderr": 0.028112091210117457
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5512820512820513,
"acc_stderr": 0.02521731518484649,
"acc_norm": 0.5512820512820513,
"acc_norm_stderr": 0.02521731518484649
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3074074074074074,
"acc_stderr": 0.028133252578815632,
"acc_norm": 0.3074074074074074,
"acc_norm_stderr": 0.028133252578815632
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6512605042016807,
"acc_stderr": 0.030956636328566545,
"acc_norm": 0.6512605042016807,
"acc_norm_stderr": 0.030956636328566545
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3576158940397351,
"acc_stderr": 0.03913453431177258,
"acc_norm": 0.3576158940397351,
"acc_norm_stderr": 0.03913453431177258
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7853211009174312,
"acc_stderr": 0.017604304149256483,
"acc_norm": 0.7853211009174312,
"acc_norm_stderr": 0.017604304149256483
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.48148148148148145,
"acc_stderr": 0.03407632093854054,
"acc_norm": 0.48148148148148145,
"acc_norm_stderr": 0.03407632093854054
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7549019607843137,
"acc_stderr": 0.030190282453501943,
"acc_norm": 0.7549019607843137,
"acc_norm_stderr": 0.030190282453501943
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7510548523206751,
"acc_stderr": 0.028146970599422644,
"acc_norm": 0.7510548523206751,
"acc_norm_stderr": 0.028146970599422644
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6412556053811659,
"acc_stderr": 0.03219079200419996,
"acc_norm": 0.6412556053811659,
"acc_norm_stderr": 0.03219079200419996
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6412213740458015,
"acc_stderr": 0.04206739313864908,
"acc_norm": 0.6412213740458015,
"acc_norm_stderr": 0.04206739313864908
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7024793388429752,
"acc_stderr": 0.04173349148083499,
"acc_norm": 0.7024793388429752,
"acc_norm_stderr": 0.04173349148083499
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7407407407407407,
"acc_stderr": 0.04236511258094633,
"acc_norm": 0.7407407407407407,
"acc_norm_stderr": 0.04236511258094633
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7055214723926381,
"acc_stderr": 0.03581165790474082,
"acc_norm": 0.7055214723926381,
"acc_norm_stderr": 0.03581165790474082
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.36607142857142855,
"acc_stderr": 0.045723723587374296,
"acc_norm": 0.36607142857142855,
"acc_norm_stderr": 0.045723723587374296
},
"harness|hendrycksTest-management|5": {
"acc": 0.7087378640776699,
"acc_stderr": 0.04498676320572924,
"acc_norm": 0.7087378640776699,
"acc_norm_stderr": 0.04498676320572924
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8589743589743589,
"acc_stderr": 0.022801382534597552,
"acc_norm": 0.8589743589743589,
"acc_norm_stderr": 0.022801382534597552
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7662835249042146,
"acc_stderr": 0.015133383278988829,
"acc_norm": 0.7662835249042146,
"acc_norm_stderr": 0.015133383278988829
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.661849710982659,
"acc_stderr": 0.025469770149400172,
"acc_norm": 0.661849710982659,
"acc_norm_stderr": 0.025469770149400172
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.264804469273743,
"acc_stderr": 0.014756906483260664,
"acc_norm": 0.264804469273743,
"acc_norm_stderr": 0.014756906483260664
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6601307189542484,
"acc_stderr": 0.027121956071388852,
"acc_norm": 0.6601307189542484,
"acc_norm_stderr": 0.027121956071388852
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.639871382636656,
"acc_stderr": 0.027264297599804015,
"acc_norm": 0.639871382636656,
"acc_norm_stderr": 0.027264297599804015
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6296296296296297,
"acc_stderr": 0.02686949074481527,
"acc_norm": 0.6296296296296297,
"acc_norm_stderr": 0.02686949074481527
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4574468085106383,
"acc_stderr": 0.029719281272236848,
"acc_norm": 0.4574468085106383,
"acc_norm_stderr": 0.029719281272236848
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4152542372881356,
"acc_stderr": 0.012585471793400664,
"acc_norm": 0.4152542372881356,
"acc_norm_stderr": 0.012585471793400664
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6213235294117647,
"acc_stderr": 0.02946513363977613,
"acc_norm": 0.6213235294117647,
"acc_norm_stderr": 0.02946513363977613
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6094771241830066,
"acc_stderr": 0.019737008998094597,
"acc_norm": 0.6094771241830066,
"acc_norm_stderr": 0.019737008998094597
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6454545454545455,
"acc_stderr": 0.045820048415054174,
"acc_norm": 0.6454545454545455,
"acc_norm_stderr": 0.045820048415054174
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6857142857142857,
"acc_stderr": 0.02971932942241747,
"acc_norm": 0.6857142857142857,
"acc_norm_stderr": 0.02971932942241747
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7661691542288557,
"acc_stderr": 0.029929415408348398,
"acc_norm": 0.7661691542288557,
"acc_norm_stderr": 0.029929415408348398
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.82,
"acc_stderr": 0.03861229196653693,
"acc_norm": 0.82,
"acc_norm_stderr": 0.03861229196653693
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4578313253012048,
"acc_stderr": 0.038786267710023595,
"acc_norm": 0.4578313253012048,
"acc_norm_stderr": 0.038786267710023595
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8245614035087719,
"acc_stderr": 0.02917088550072768,
"acc_norm": 0.8245614035087719,
"acc_norm_stderr": 0.02917088550072768
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4785801713586291,
"mc1_stderr": 0.01748743214471181,
"mc2": 0.6545579637974486,
"mc2_stderr": 0.01605098507055363
},
"harness|winogrande|5": {
"acc": 0.7995264404104183,
"acc_stderr": 0.01125195828120508
},
"harness|gsm8k|5": {
"acc": 0.3305534495830174,
"acc_stderr": 0.012957496367085028
}
}
```
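The per-task entries above can also be aggregated locally. A minimal sketch, using a few `acc` values copied verbatim from the snippet above (trimmed to three tasks for brevity; the full results dict works the same way):

```python
# Compute a simple mean accuracy over a handful of the per-task results above.
results = {
    "harness|arc:challenge|25": {"acc": 0.6390784982935154},
    "harness|hellaswag|10": {"acc": 0.6716789484166501},
    "harness|winogrande|5": {"acc": 0.7995264404104183},
}

mean_acc = sum(v["acc"] for v in results.values()) / len(results)
print(round(mean_acc, 4))  # → 0.7034
```

Note that the leaderboard itself applies its own aggregation rules; this unweighted mean is only an illustration of how to work with the JSON locally.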
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
autoevaluate/autoeval-eval-squad-plain_text-be943f-1842563163 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad
eval_info:
task: extractive_question_answering
model: 21iridescent/RoBERTa-base-finetuned-squad2-lwt
metrics: ['squad', 'bertscore']
dataset_name: squad
dataset_config: plain_text
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: 21iridescent/RoBERTa-base-finetuned-squad2-lwt
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
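The `col_mapping` in the metadata above maps evaluator column names to dotted paths into the dataset's nested records (e.g. `answers-text: answers.text`). A minimal sketch of resolving such a dotted path against a SQuAD-style record (the record itself is illustrative, not taken from the dataset):

```python
# Resolve dotted column paths like "answers.text" against a nested record.
def resolve(record, dotted_path):
    value = record
    for key in dotted_path.split("."):
        value = value[key]
    return value

record = {
    "context": "Paris is the capital of France.",
    "question": "What is the capital of France?",
    "answers": {"text": ["Paris"], "answer_start": [0]},
}

print(resolve(record, "answers.text"))  # → ['Paris']
```

The actual AutoTrain evaluator performs this mapping internally; the helper above only illustrates the semantics of the dotted paths.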
## Contributions
Thanks to [@jsfs11](https://huggingface.co/jsfs11) for evaluating this model. |
bb1nfosec/OWASP | ---
license: gpl
---
|
slaqrichi/processed_Cosmic_dataset_V2_inst_format | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 86815
num_examples: 95
download_size: 0
dataset_size: 86815
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "processed_Cosmic_dataset_V2_inst_format"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/random_letter_same_length_find_passage_train30_eval20_title | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 26610
num_examples: 80
- name: validation
num_bytes: 7512
num_examples: 20
download_size: 24572
dataset_size: 34122
---
# Dataset Card for "random_letter_same_length_find_passage_train30_eval20_title"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ovior/twitter_dataset_1713193438 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 2432915
num_examples: 7217
download_size: 1398855
dataset_size: 2432915
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
heliosprime/twitter_dataset_1713105236 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 12152
num_examples: 36
download_size: 13889
dataset_size: 12152
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1713105236"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
agi-css/sandbox_v1 | ---
license: apache-2.0
---
|
grimu/wikiner-fr-bme | ---
dataset_info:
features:
- name: id
dtype: int64
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': LOC
'2': PER
'3': MISC
'4': ORG
- name: one_hot
sequence:
sequence:
sequence: float64
- name: ner_padded
sequence: int64
splits:
- name: train
num_bytes: 91935318915
num_examples: 114215
download_size: 86123839
dataset_size: 91935318915
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Omriy123/Dogs_vs_Cats | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': cat
'1': dog
splits:
- name: train
num_bytes: 525901830.0
num_examples: 25000
download_size: 573158859
dataset_size: 525901830.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
wilsonslz/MARCIOSOLNASCENTE | ---
license: openrail
---
|