| datasetId | card |
|---|---|
matterr/products-10k-test | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 1997550994.326
num_examples: 10001
download_size: 2344525315
dataset_size: 1997550994.326
---
# Dataset Card for "products-second-checkpoint"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zolak/twitter_dataset_80_1713226911 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 146633
num_examples: 350
download_size: 79104
dataset_size: 146633
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HydraLM/SkunkData-002-convid-cluster | ---
dataset_info:
features:
- name: unique_conversation_id
dtype: string
- name: cluster
dtype: int32
splits:
- name: train
num_bytes: 89257780
num_examples: 1472917
download_size: 17951475
dataset_size: 89257780
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "SkunkData-002-convid-cluster"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Jiahuan/teach_action | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 605833565
num_examples: 231272
- name: validation
num_bytes: 76965684
num_examples: 28960
- name: test
num_bytes: 208815535
num_examples: 91903
download_size: 65220573
dataset_size: 891614784
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
enimai/MuST-C-and-WMT16-de-en | ---
license: afl-3.0
---
|
datablations/oscar-filter | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: meta
struct:
- name: warc_headers
struct:
- name: warc-record-id
dtype: string
- name: warc-date
dtype: string
- name: content-type
dtype: string
- name: content-length
dtype: int32
- name: warc-type
dtype: string
- name: warc-identified-content-language
dtype: string
- name: warc-refers-to
dtype: string
- name: warc-target-uri
dtype: string
- name: warc-block-digest
dtype: string
- name: identification
struct:
- name: label
dtype: string
- name: prob
dtype: float32
- name: annotations
sequence: string
- name: line_identifications
list:
- name: label
dtype: string
- name: prob
dtype: float32
- name: perplexity_score
dtype: float64
- name: text_length
dtype: int64
- name: url
dtype: string
- name: domain
dtype: string
- name: dup_ratio
dtype: float64
- name: pairs
sequence:
sequence: int64
- name: repetitions
sequence: binary
- name: included_in_dedup
dtype: bool
- name: cluster
sequence: int64
splits:
- name: train
num_bytes: 3188486875748
num_examples: 431992659
download_size: 419397499659
dataset_size: 3188486875748
---
This is the one where we build the suffix array for 25% of Oscar and deduplicate only that part. By deduplication I mean removing any document that has an at-least-100-character span overlapping with another document in the 25% chunk. This is very strict and preserves only about 20 million documents, so less than 5% of the full Oscar. |
HaiLong9901/VietNameseLongTextSum | ---
task_categories:
- summarization
language:
- vi
pretty_name: VietNameseLongTextSum
--- |
Mireu-Lab/NSL-KDD | ---
license: gpl-3.0
tags:
- Network Security
---
# NSL-KDD
> This dataset was created by converting the ARFF files provided at the [link](https://www.unb.ca/cic/datasets/nsl.html) into CSV.
>
> Numeric data has been converted to float64 before storage.
>
> If you want the original files, they are organized in the [Original directory](./Original) of this repo.
## Labels
The columns of the dataset are as follows.
|#|Column|Non-Null|Count|Dtype|
|---|---|---|---|---|
|0|duration|151165|non-null|int64|
|1|protocol_type|151165|non-null|object|
|2|service|151165|non-null|object|
|3|flag|151165|non-null|object|
|4|src_bytes|151165|non-null|int64|
|5|dst_bytes|151165|non-null|int64|
|6|land|151165|non-null|int64|
|7|wrong_fragment|151165|non-null|int64|
|8|urgent|151165|non-null|int64|
|9|hot|151165|non-null|int64|
|10|num_failed_logins|151165|non-null|int64|
|11|logged_in|151165|non-null|int64|
|12|num_compromised|151165|non-null|int64|
|13|root_shell|151165|non-null|int64|
|14|su_attempted|151165|non-null|int64|
|15|num_root|151165|non-null|int64|
|16|num_file_creations|151165|non-null|int64|
|17|num_shells|151165|non-null|int64|
|18|num_access_files|151165|non-null|int64|
|19|num_outbound_cmds|151165|non-null|int64|
|20|is_host_login|151165|non-null|int64|
|21|is_guest_login|151165|non-null|int64|
|22|count|151165|non-null|int64|
|23|srv_count|151165|non-null|int64|
|24|serror_rate|151165|non-null|float64|
|25|srv_serror_rate|151165|non-null|float64|
|26|rerror_rate|151165|non-null|float64|
|27|srv_rerror_rate|151165|non-null|float64|
|28|same_srv_rate|151165|non-null|float64|
|29|diff_srv_rate|151165|non-null|float64|
|30|srv_diff_host_rate|151165|non-null|float64|
|31|dst_host_count|151165|non-null|int64|
|32|dst_host_srv_count|151165|non-null|int64|
|33|dst_host_same_srv_rate|151165|non-null|float64|
|34|dst_host_diff_srv_rate|151165|non-null|float64|
|35|dst_host_same_src_port_rate|151165|non-null|float64|
|36|dst_host_srv_diff_host_rate|151165|non-null|float64|
|37|dst_host_serror_rate|151165|non-null|float64|
|38|dst_host_srv_serror_rate|151165|non-null|float64|
|39|dst_host_rerror_rate|151165|non-null|float64|
|40|dst_host_srv_rerror_rate|151165|non-null|float64|
|41|class|151165|non-null|float64|
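The dtype layout above can be reproduced with pandas; a minimal sketch follows (the rows and the column subset are hypothetical, only the dtypes mirror the table):

```python
import io

import pandas as pd

# Hypothetical two-row sample in the converted-CSV layout described above.
sample = io.StringIO(
    "duration,protocol_type,service,flag,src_bytes,class\n"
    "0,tcp,http,SF,181,0.0\n"
    "0,udp,domain_u,SF,105,1.0\n"
)
df = pd.read_csv(sample)

# Integer columns load as int64, string columns as object, and the
# float-converted `class` column as float64, matching the table above.
print(df.dtypes.to_dict())
```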
|
arazd/tulu_code_alpaca | ---
license: openrail
---
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/b91c4397 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 186
num_examples: 10
download_size: 1338
dataset_size: 186
---
# Dataset Card for "b91c4397"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
olm/olm-october-2022-tokenized-1024 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: special_tokens_mask
sequence: int8
splits:
- name: train
num_bytes: 79468727400
num_examples: 12909150
download_size: 21027268683
dataset_size: 79468727400
---
# Dataset Card for "olm-october-2022-tokenized-1024"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
davidfant/natural-questions-chunk-6 | ---
dataset_info:
features:
- name: id
dtype: string
- name: document
struct:
- name: html
dtype: string
- name: title
dtype: string
- name: tokens
sequence:
- name: end_byte
dtype: int64
- name: is_html
dtype: bool
- name: start_byte
dtype: int64
- name: token
dtype: string
- name: url
dtype: string
- name: question
struct:
- name: text
dtype: string
- name: tokens
sequence: string
- name: long_answer_candidates
sequence:
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: top_level
dtype: bool
- name: annotations
sequence:
- name: id
dtype: string
- name: long_answer
struct:
- name: candidate_index
dtype: int64
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: short_answers
sequence:
- name: end_byte
dtype: int64
- name: end_token
dtype: int64
- name: start_byte
dtype: int64
- name: start_token
dtype: int64
- name: text
dtype: string
- name: yes_no_answer
dtype:
class_label:
names:
'0': 'NO'
'1': 'YES'
splits:
- name: train
num_bytes: 4655306372
num_examples: 10000
download_size: 1805442960
dataset_size: 4655306372
---
# Dataset Card for "natural-questions-chunk-6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
eswardivi/Tamil_MSA_Audio_Text_Chunked | ---
dataset_info:
features:
- name: Audio
dtype: audio
- name: label
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: FilePath
dtype: string
- name: Text
dtype: string
splits:
- name: train
num_bytes: 39958223.0
num_examples: 128
download_size: 39793452
dataset_size: 39958223.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CyberHarem/princeton_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of princeton/プリンストン/普林斯顿 (Azur Lane)
This is the dataset of princeton/プリンストン/普林斯顿 (Azur Lane), containing 27 images and their tags.
The core tags of this character are `bangs, long_hair, breasts, pink_hair, pink_eyes, very_long_hair, large_breasts, red_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:--------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 27 | 45.96 MiB | [Download](https://huggingface.co/datasets/CyberHarem/princeton_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 27 | 24.46 MiB | [Download](https://huggingface.co/datasets/CyberHarem/princeton_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 62 | 49.10 MiB | [Download](https://huggingface.co/datasets/CyberHarem/princeton_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 27 | 39.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/princeton_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 62 | 73.79 MiB | [Download](https://huggingface.co/datasets/CyberHarem/princeton_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/princeton_azurlane',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 14 |  |  |  |  |  | 1girl, solo, looking_at_viewer, frilled_bikini, black_bikini, cleavage, collarbone, bare_shoulders, navel, off-shoulder_bikini, blush, medium_breasts, water, closed_mouth, day, outdoors, smile, lying, ocean, open_mouth, sky |
| 1 | 6 |  |  |  |  |  | looking_at_viewer, 1girl, bare_shoulders, cleavage, collarbone, white_shirt, blush, red_bow, smile, solo, belt, blue_skirt, closed_mouth, full_body, short_sleeves, simple_background, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | looking_at_viewer | frilled_bikini | black_bikini | cleavage | collarbone | bare_shoulders | navel | off-shoulder_bikini | blush | medium_breasts | water | closed_mouth | day | outdoors | smile | lying | ocean | open_mouth | sky | white_shirt | red_bow | belt | blue_skirt | full_body | short_sleeves | simple_background | white_background |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------------------|:-----------------|:---------------|:-----------|:-------------|:-----------------|:--------|:----------------------|:--------|:-----------------|:--------|:---------------|:------|:-----------|:--------|:--------|:--------|:-------------|:------|:--------------|:----------|:-------|:-------------|:------------|:----------------|:--------------------|:-------------------|
| 0 | 14 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | X | X | | | X | X | X | | | X | | | X | | | X | | | | | X | X | X | X | X | X | X | X |
|
PetroGPT/petro_dataset | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 521973
num_examples: 1958
download_size: 232475
dataset_size: 521973
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AwesomeEmerald/testDataset01 | ---
license: cc0-1.0
---
|
carlosejimenez/seq2seq-qnli | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: string
- name: orig_idx
dtype: int64
splits:
- name: train
num_bytes: 29173683
num_examples: 104743
- name: validation
num_bytes: 1554164
num_examples: 5463
- name: test
num_bytes: 1542446
num_examples: 5463
download_size: 0
dataset_size: 32270293
---
# Dataset Card for "seq2seq-qnli"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Yankz/reasoning | ---
license: apache-2.0
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 4247564
num_examples: 1000
download_size: 2250258
dataset_size: 4247564
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
anjandash/java-8m-methods-v2 | ---
language:
- java
license:
- mit
multilinguality:
- monolingual
pretty_name:
- java-8m-methods-v2
--- |
CyberHarem/i_58_kantaicollection | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of i_58/伊58/伊58 (Kantai Collection)
This is the dataset of i_58/伊58/伊58 (Kantai Collection), containing 500 images and their tags.
The core tags of this character are `short_hair, ahoge, pink_hair, hair_ornament, pink_eyes, red_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 375.50 MiB | [Download](https://huggingface.co/datasets/CyberHarem/i_58_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 265.80 MiB | [Download](https://huggingface.co/datasets/CyberHarem/i_58_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1024 | 534.36 MiB | [Download](https://huggingface.co/datasets/CyberHarem/i_58_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 351.66 MiB | [Download](https://huggingface.co/datasets/CyberHarem/i_58_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1024 | 671.07 MiB | [Download](https://huggingface.co/datasets/CyberHarem/i_58_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/i_58_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 10 |  |  |  |  |  | 1girl, blue_one-piece_swimsuit, blue_sailor_collar, looking_at_viewer, school_swimsuit, serafuku, solo, swimsuit_under_clothes, cowboy_shot, one-hour_drawing_challenge, pink_neckerchief, pink_necktie, sailor_shirt, simple_background, twitter_username, white_background, dated |
| 1 | 35 |  |  |  |  |  | 1girl, school_swimsuit, serafuku, solo, swimsuit_under_clothes, looking_at_viewer, torpedo, one-piece_swimsuit, smile, blush, open_mouth |
| 2 | 5 |  |  |  |  |  | 1girl, air_bubble, school_swimsuit, serafuku, solo, swimsuit_under_clothes, underwater, one-piece_swimsuit, torpedo, blush, open_mouth, smile, fish, looking_at_viewer |
| 3 | 12 |  |  |  |  |  | school_swimsuit, serafuku, swimsuit_under_clothes, 2girls, long_hair, open_mouth, blush, one-piece_swimsuit |
| 4 | 9 |  |  |  |  |  | 1girl, blush, hetero, one-piece_swimsuit, penis, school_swimsuit, solo_focus, open_mouth, sex, swimsuit_aside, vaginal, 1boy, cum_in_pussy, nipples, small_breasts, spread_legs, bar_censor, serafuku, mosaic_censoring, shirt_lift, swimsuit_under_clothes, tears, torn_clothes |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blue_one-piece_swimsuit | blue_sailor_collar | looking_at_viewer | school_swimsuit | serafuku | solo | swimsuit_under_clothes | cowboy_shot | one-hour_drawing_challenge | pink_neckerchief | pink_necktie | sailor_shirt | simple_background | twitter_username | white_background | dated | torpedo | one-piece_swimsuit | smile | blush | open_mouth | air_bubble | underwater | fish | 2girls | long_hair | hetero | penis | solo_focus | sex | swimsuit_aside | vaginal | 1boy | cum_in_pussy | nipples | small_breasts | spread_legs | bar_censor | mosaic_censoring | shirt_lift | tears | torn_clothes |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------------|:---------------------|:--------------------|:------------------|:-----------|:-------|:-------------------------|:--------------|:-----------------------------|:-------------------|:---------------|:---------------|:--------------------|:-------------------|:-------------------|:--------|:----------|:---------------------|:--------|:--------|:-------------|:-------------|:-------------|:-------|:---------|:------------|:---------|:--------|:-------------|:------|:-----------------|:----------|:-------|:---------------|:----------|:----------------|:--------------|:-------------|:-------------------|:-------------|:--------|:---------------|
| 0 | 10 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 35 |  |  |  |  |  | X | | | X | X | X | X | X | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | | | X | X | X | X | X | | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | |
| 3 | 12 |  |  |  |  |  | | | | | X | X | | X | | | | | | | | | | | X | | X | X | | | | X | X | | | | | | | | | | | | | | | | |
| 4 | 9 |  |  |  |  |  | X | | | | X | X | | X | | | | | | | | | | | X | | X | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
luna-code/starcoderdata-apis | ---
dataset_info:
features:
- name: code
dtype: string
- name: apis
sequence: string
- name: extract_api
dtype: string
splits:
- name: train
num_bytes: 38947343551
num_examples: 4772871
download_size: 13555618994
dataset_size: 38947343551
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ovior/twitter_dataset_1713145676 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 2597403
num_examples: 8075
download_size: 1456618
dataset_size: 2597403
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
lewtun/music_classification | ---
license: unknown
---
|
allenai/cochrane_dense_mean | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-MS^2
- extended|other-Cochrane
task_categories:
- summarization
- text2text-generation
paperswithcode_id: multi-document-summarization
pretty_name: MSLR Shared Task
---
This is a copy of the [Cochrane](https://huggingface.co/datasets/allenai/mslr2022) dataset, except that the input source documents of its `train`, `validation` and `test` splits have been replaced by documents retrieved with a __dense__ retriever. The retrieval pipeline used:
- __query__: The `target` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`.
- __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==9`
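The corpus construction and the `"max"` top-k strategy described above can be sketched as follows (toy records with hypothetical field names, not the actual pipeline):

```python
def build_corpus(splits):
    """Union of all documents across the train/validation/test splits;
    each document is the concatenation of its title and abstract."""
    corpus = {}
    for split in splits:
        for doc in split:
            corpus[doc["id"]] = doc["title"] + " " + doc["abstract"]
    return corpus


def max_top_k(examples):
    """'max' strategy: k is the largest number of source documents seen
    across examples (k == 9 for this dataset, per the card)."""
    return max(len(ex["docs"]) for ex in examples)


# Toy illustration with hypothetical records:
splits = [
    [{"id": "a", "title": "T1", "abstract": "A1"}],
    [{"id": "b", "title": "T2", "abstract": "A2"}],
]
examples = [{"docs": ["a"]}, {"docs": ["a", "b"]}]
print(len(build_corpus(splits)), max_top_k(examples))
```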
Retrieval results on the `train` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.7790 | 0.4487 | 0.3438 | 0.4800 |
Retrieval results on the `validation` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
| ----------- | ----------- | ----------- | ----------- |
| 0.7856 | 0.4424 | 0.3534 | 0.4913 |
Retrieval results on the `test` set:
N/A. Test set is blind so we do not have any queries. |
newguyme/flir_paired_captioned | ---
dataset_info:
features:
- name: rgb
dtype: image
- name: ir
dtype: image
- name: image_caption
dtype: string
- name: empty_caption
dtype: string
splits:
- name: train
num_bytes: 504462163.875
num_examples: 4113
download_size: 504281958
dataset_size: 504462163.875
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Vested-Sigil/ezmode | ---
license: creativeml-openrail-m
---
|
mesolitica/nusantara-audiobook | ---
language:
- ms
task_categories:
- automatic-speech-recognition
- text-to-speech
---
# Pseudolabel Nusantara audiobooks using Whisper Large V3
Notebooks at https://github.com/mesolitica/malaysian-dataset/tree/master/speech-to-text-semisupervised/nusantara-audiobook
1. Audio is split into chunks of 3 utterances using WebRTC VAD.
## how-to
Download the files:
```bash
wget https://huggingface.co/datasets/mesolitica/nusantara-audiobook/resolve/main/dari-pasentran-ke-istana.gz
wget https://huggingface.co/datasets/mesolitica/nusantara-audiobook/resolve/main/salina.gz
wget https://huggingface.co/datasets/mesolitica/nusantara-audiobook/resolve/main/turki.gz
wget https://huggingface.co/datasets/mesolitica/nusantara-audiobook/resolve/main/nusantara-audiobook-part1.json
wget https://huggingface.co/datasets/mesolitica/nusantara-audiobook/resolve/main/nusantara-audiobook-part2.json
tar -xf dari-pasentran-ke-istana.gz
tar -xf turki.gz
tar -xf salina.gz
``` |
skrishna/allenai-real-toxicity-prompts_160M_toxic | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 47932
num_examples: 100
- name: test
num_bytes: 24438
num_examples: 50
download_size: 36379
dataset_size: 72370
---
# Dataset Card for "allenai-real-toxicity-prompts_160M_toxic"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
NobodyExistsOnTheInternet/ToxicDPOqa | ---
license: mit
dataset_info:
features:
- name: majortopic
dtype: string
- name: topic
dtype: string
- name: subtopics
dtype: string
- name: prompt
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
- name: question
dtype: string
- name: system
dtype: string
splits:
- name: train
num_bytes: 51962752
num_examples: 6866
download_size: 25348482
dataset_size: 51962752
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- not-for-all-audiences
---
Same formatting as https://huggingface.co/datasets/Intel/orca_dpo_pairs
Use with
```
datasets:
- path: NobodyExistsOnTheInternet/ToxicDPOqa
split: train
type: intel_apply_chatml
```
in axolotl.
Use only for alignment research. NEOTI is not responsible for what you might do with it. |
en0c/km-shorts | ---
dataset_info:
features:
- name: audio
dtype: audio
splits:
- name: train
num_bytes: 173849408.0
num_examples: 45
download_size: 157813790
dataset_size: 173849408.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "km-shorts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Avatari/munie | ---
license: openrail
---
|
TingChen-ppmc/Zhengzhou_Dialect_Conversational_Speech_Corpus | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: gender
dtype: string
- name: speaker_id
dtype: string
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 198995934.614
num_examples: 2006
download_size: 179378562
dataset_size: 198995934.614
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Corpus
This dataset is built from Magicdata [ASR-CZDIACSC: A CHINESE ZHENGZHOU DIALECT CONVERSATIONAL SPEECH CORPUS](https://magichub.com/datasets/zhengzhou-dialect-conversational-speech-corpus/)
This corpus is licensed under a [Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License](http://creativecommons.org/licenses/by-nc-nd/4.0/). Please refer to the license for further information.
Modifications: The audio is split into sentences based on the time spans in the transcription file. Sentences spanning less than 1 second are discarded. Conversation topics are removed.
# Usage
To load this dataset, use
```python
from datasets import load_dataset
dialect_corpus = load_dataset("TingChen-ppmc/Zhengzhou_Dialect_Conversational_Speech_Corpus")
```
This dataset only has a train split. To create a test split, use
```python
from datasets import load_dataset
train_split = load_dataset("TingChen-ppmc/Zhengzhou_Dialect_Conversational_Speech_Corpus", split="train")
# where test_size=0.5 denotes that 0.5 of the dataset will be split into the test split
corpus = train_split.train_test_split(test_size=0.5)
```
A sample data would be
```python
# note this data is from the Nanchang Dialect corpus, the data format is shared
{'audio':
{'path': 'A0001_S001_0_G0001_0.WAV',
'array': array([-0.00030518, -0.00039673,
-0.00036621, ..., -0.00064087,
-0.00015259, -0.00042725]),
'sampling_rate': 16000},
'gender': '女',
'speaker_id': 'G0001',
'transcription': '北京爱数智慧语音采集'
}
```
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Deojoandco/dialogturns_not_generated_test | ---
dataset_info:
features:
- name: url
dtype: string
- name: id
dtype: string
- name: num_comments
dtype: int64
- name: name
dtype: string
- name: title
dtype: string
- name: body
dtype: string
- name: score
dtype: int64
- name: upvote_ratio
dtype: float64
- name: distinguished
dtype: string
- name: over_18
dtype: bool
- name: created_utc
dtype: int64
- name: comments
list:
- name: body
dtype: string
- name: created_utc
dtype: float64
- name: distinguished
dtype: string
- name: id
dtype: string
- name: permalink
dtype: string
- name: score
dtype: int64
- name: best_num_comments
dtype: int64
- name: query
dtype: string
- name: dialog
dtype: string
- name: annotation_success
dtype: bool
- name: annotation_text
dtype: string
- name: turns_generated
dtype: bool
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 883579
num_examples: 34
download_size: 561093
dataset_size: 883579
---
# Dataset Card for "dialogturns_not_generated_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lyimo/badili-swahili | ---
license: mit
---
|
alvations/towerbl0cks-w3ird-pr0mpt5 | ---
dataset_info:
features:
- name: weird_prompts
dtype: string
splits:
- name: train
num_bytes: 7331206
num_examples: 62954
download_size: 1894354
dataset_size: 7331206
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Ali-Maq/pubmed_mm_Data | ---
license: apache-2.0
---
|
Nexdata/895_Fire_Videos_Data | ---
license: cc-by-nc-nd-4.0
---
## Description
895 Fire Videos Data: the total duration of the videos is 27 hours 6 minutes 48.58 seconds. Different cameras were used to shoot the fire videos, during both day and night. The dataset can be used for tasks such as fire detection.
For more details, please refer to the link: https://www.nexdata.ai/dataset/92?source=Huggingface
## Data size
895 videos, the total duration is 27 hours 6 minutes 48.58 seconds
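The totals above imply an average clip length that is easy to sanity-check; a minimal sketch (the numbers are taken from this card):

```python
# Sanity-check the stated totals: 895 videos, 27 h 6 min 48.58 s overall.
total_seconds = 27 * 3600 + 6 * 60 + 48.58   # -> 97608.58 s
num_videos = 895
avg_clip_seconds = total_seconds / num_videos  # average clip length, ~109.1 s
print(round(avg_clip_seconds, 1))
```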
## Collecting environment
including indoor and outdoor scenes
## Data diversity
multiple scenes, different time periods
## Device
cameras (Samsung, Hikvision and Axis)
## Collecting angles
looking-down angle
## Data format
the video data format is .avi
## Collecting content
collecting fire videos in different scenes
## Accuracy
the accuracy of the labels (collecting time, collecting scene, occlusion, collection distance, collecting device) is not less than 97%
## Licensing Information
Commercial License
|
AdapterOcean/data-standardized_cluster_10_alpaca | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 17987323
num_examples: 8364
download_size: 7558580
dataset_size: 17987323
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "data-standardized_cluster_10_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/fwv2_random_num_train_10000_eval_100 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: train_doc2id
path: data/train_doc2id-*
- split: train_id2doc
path: data/train_id2doc-*
- split: train_find_word
path: data/train_find_word-*
- split: eval_find_word
path: data/eval_find_word-*
- split: id_context_mapping
path: data/id_context_mapping-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1909652
num_examples: 20100
- name: train_doc2id
num_bytes: 857494
num_examples: 10100
- name: train_id2doc
num_bytes: 887794
num_examples: 10100
- name: train_find_word
num_bytes: 1021858
num_examples: 10000
- name: eval_find_word
num_bytes: 10346
num_examples: 100
- name: id_context_mapping
num_bytes: 564594
num_examples: 10100
download_size: 2074803
dataset_size: 5251738
---
# Dataset Card for "fwv2_random_num_train_10000_eval_100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_Nitral-AI__Nyan-Stunna-7B | ---
pretty_name: Evaluation run of Nitral-AI/Nyan-Stunna-7B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Nitral-AI/Nyan-Stunna-7B](https://huggingface.co/Nitral-AI/Nyan-Stunna-7B) on\
\ the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
  \nThe dataset is composed of 63 configurations, each one corresponding to one of the\
  \ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
  \ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Nitral-AI__Nyan-Stunna-7B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-04-07T04:44:12.361075](https://huggingface.co/datasets/open-llm-leaderboard/details_Nitral-AI__Nyan-Stunna-7B/blob/main/results_2024-04-07T04-44-12.361075.json)(note\
  \ that there might be results for other tasks in the repos if successive evals didn't\
  \ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6470185192425486,\n\
\ \"acc_stderr\": 0.03219002860343247,\n \"acc_norm\": 0.6496484281299122,\n\
\ \"acc_norm_stderr\": 0.032831744184837246,\n \"mc1\": 0.423500611995104,\n\
\ \"mc1_stderr\": 0.017297421448534727,\n \"mc2\": 0.6006477207402555,\n\
\ \"mc2_stderr\": 0.015322634627558495\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6399317406143344,\n \"acc_stderr\": 0.014027516814585188,\n\
\ \"acc_norm\": 0.6638225255972696,\n \"acc_norm_stderr\": 0.01380485502620576\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6702848038239394,\n\
\ \"acc_stderr\": 0.00469148881303216,\n \"acc_norm\": 0.8556064528978291,\n\
\ \"acc_norm_stderr\": 0.0035076999350742376\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.38,\n \"acc_stderr\": 0.048783173121456316,\n \
\ \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.048783173121456316\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6148148148148148,\n\
\ \"acc_stderr\": 0.04203921040156279,\n \"acc_norm\": 0.6148148148148148,\n\
\ \"acc_norm_stderr\": 0.04203921040156279\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6710526315789473,\n \"acc_stderr\": 0.03823428969926605,\n\
\ \"acc_norm\": 0.6710526315789473,\n \"acc_norm_stderr\": 0.03823428969926605\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.58,\n\
\ \"acc_stderr\": 0.04960449637488583,\n \"acc_norm\": 0.58,\n \
\ \"acc_norm_stderr\": 0.04960449637488583\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7018867924528301,\n \"acc_stderr\": 0.028152837942493864,\n\
\ \"acc_norm\": 0.7018867924528301,\n \"acc_norm_stderr\": 0.028152837942493864\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7569444444444444,\n\
\ \"acc_stderr\": 0.035868792800803406,\n \"acc_norm\": 0.7569444444444444,\n\
\ \"acc_norm_stderr\": 0.035868792800803406\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \
\ \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \
\ \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6589595375722543,\n\
\ \"acc_stderr\": 0.03614665424180826,\n \"acc_norm\": 0.6589595375722543,\n\
\ \"acc_norm_stderr\": 0.03614665424180826\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.43137254901960786,\n \"acc_stderr\": 0.04928099597287533,\n\
\ \"acc_norm\": 0.43137254901960786,\n \"acc_norm_stderr\": 0.04928099597287533\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.79,\n \"acc_stderr\": 0.04093601807403326,\n \"acc_norm\": 0.79,\n\
\ \"acc_norm_stderr\": 0.04093601807403326\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5446808510638298,\n \"acc_stderr\": 0.03255525359340355,\n\
\ \"acc_norm\": 0.5446808510638298,\n \"acc_norm_stderr\": 0.03255525359340355\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4824561403508772,\n\
\ \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.4824561403508772,\n\
\ \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5724137931034483,\n \"acc_stderr\": 0.04122737111370332,\n\
\ \"acc_norm\": 0.5724137931034483,\n \"acc_norm_stderr\": 0.04122737111370332\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.3941798941798942,\n \"acc_stderr\": 0.02516798233389414,\n \"\
acc_norm\": 0.3941798941798942,\n \"acc_norm_stderr\": 0.02516798233389414\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4365079365079365,\n\
\ \"acc_stderr\": 0.04435932892851466,\n \"acc_norm\": 0.4365079365079365,\n\
\ \"acc_norm_stderr\": 0.04435932892851466\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.37,\n \"acc_stderr\": 0.048523658709391,\n \
\ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.048523658709391\n },\n\
\ \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7516129032258064,\n\
\ \"acc_stderr\": 0.024580028921481003,\n \"acc_norm\": 0.7516129032258064,\n\
\ \"acc_norm_stderr\": 0.024580028921481003\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5369458128078818,\n \"acc_stderr\": 0.035083705204426656,\n\
\ \"acc_norm\": 0.5369458128078818,\n \"acc_norm_stderr\": 0.035083705204426656\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\"\
: 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7636363636363637,\n \"acc_stderr\": 0.03317505930009181,\n\
\ \"acc_norm\": 0.7636363636363637,\n \"acc_norm_stderr\": 0.03317505930009181\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8080808080808081,\n \"acc_stderr\": 0.028057791672989017,\n \"\
acc_norm\": 0.8080808080808081,\n \"acc_norm_stderr\": 0.028057791672989017\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.02199531196364424,\n\
\ \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.02199531196364424\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6564102564102564,\n \"acc_stderr\": 0.024078696580635477,\n\
\ \"acc_norm\": 0.6564102564102564,\n \"acc_norm_stderr\": 0.024078696580635477\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.34074074074074073,\n \"acc_stderr\": 0.028897748741131143,\n \
\ \"acc_norm\": 0.34074074074074073,\n \"acc_norm_stderr\": 0.028897748741131143\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6722689075630253,\n \"acc_stderr\": 0.03048991141767323,\n \
\ \"acc_norm\": 0.6722689075630253,\n \"acc_norm_stderr\": 0.03048991141767323\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3509933774834437,\n \"acc_stderr\": 0.03896981964257375,\n \"\
acc_norm\": 0.3509933774834437,\n \"acc_norm_stderr\": 0.03896981964257375\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8330275229357799,\n \"acc_stderr\": 0.01599015488507338,\n \"\
acc_norm\": 0.8330275229357799,\n \"acc_norm_stderr\": 0.01599015488507338\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.49537037037037035,\n \"acc_stderr\": 0.03409825519163572,\n \"\
acc_norm\": 0.49537037037037035,\n \"acc_norm_stderr\": 0.03409825519163572\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8382352941176471,\n \"acc_stderr\": 0.025845017986926917,\n \"\
acc_norm\": 0.8382352941176471,\n \"acc_norm_stderr\": 0.025845017986926917\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8143459915611815,\n \"acc_stderr\": 0.025310495376944856,\n \
\ \"acc_norm\": 0.8143459915611815,\n \"acc_norm_stderr\": 0.025310495376944856\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6681614349775785,\n\
\ \"acc_stderr\": 0.03160295143776679,\n \"acc_norm\": 0.6681614349775785,\n\
\ \"acc_norm_stderr\": 0.03160295143776679\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7633587786259542,\n \"acc_stderr\": 0.03727673575596913,\n\
\ \"acc_norm\": 0.7633587786259542,\n \"acc_norm_stderr\": 0.03727673575596913\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8099173553719008,\n \"acc_stderr\": 0.03581796951709282,\n \"\
acc_norm\": 0.8099173553719008,\n \"acc_norm_stderr\": 0.03581796951709282\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7870370370370371,\n\
\ \"acc_stderr\": 0.0395783547198098,\n \"acc_norm\": 0.7870370370370371,\n\
\ \"acc_norm_stderr\": 0.0395783547198098\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7852760736196319,\n \"acc_stderr\": 0.032262193772867744,\n\
\ \"acc_norm\": 0.7852760736196319,\n \"acc_norm_stderr\": 0.032262193772867744\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5,\n\
\ \"acc_stderr\": 0.04745789978762494,\n \"acc_norm\": 0.5,\n \
\ \"acc_norm_stderr\": 0.04745789978762494\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7766990291262136,\n \"acc_stderr\": 0.04123553189891431,\n\
\ \"acc_norm\": 0.7766990291262136,\n \"acc_norm_stderr\": 0.04123553189891431\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8760683760683761,\n\
\ \"acc_stderr\": 0.021586494001281365,\n \"acc_norm\": 0.8760683760683761,\n\
\ \"acc_norm_stderr\": 0.021586494001281365\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \
\ \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8237547892720306,\n\
\ \"acc_stderr\": 0.013625556907993457,\n \"acc_norm\": 0.8237547892720306,\n\
\ \"acc_norm_stderr\": 0.013625556907993457\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7225433526011561,\n \"acc_stderr\": 0.024105712607754307,\n\
\ \"acc_norm\": 0.7225433526011561,\n \"acc_norm_stderr\": 0.024105712607754307\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.37318435754189944,\n\
\ \"acc_stderr\": 0.016175692013381957,\n \"acc_norm\": 0.37318435754189944,\n\
\ \"acc_norm_stderr\": 0.016175692013381957\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7189542483660131,\n \"acc_stderr\": 0.025738854797818733,\n\
\ \"acc_norm\": 0.7189542483660131,\n \"acc_norm_stderr\": 0.025738854797818733\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7041800643086816,\n\
\ \"acc_stderr\": 0.025922371788818767,\n \"acc_norm\": 0.7041800643086816,\n\
\ \"acc_norm_stderr\": 0.025922371788818767\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7345679012345679,\n \"acc_stderr\": 0.024569223600460845,\n\
\ \"acc_norm\": 0.7345679012345679,\n \"acc_norm_stderr\": 0.024569223600460845\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4929078014184397,\n \"acc_stderr\": 0.02982449855912901,\n \
\ \"acc_norm\": 0.4929078014184397,\n \"acc_norm_stderr\": 0.02982449855912901\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4680573663624511,\n\
\ \"acc_stderr\": 0.012744149704869647,\n \"acc_norm\": 0.4680573663624511,\n\
\ \"acc_norm_stderr\": 0.012744149704869647\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6875,\n \"acc_stderr\": 0.02815637344037142,\n \
\ \"acc_norm\": 0.6875,\n \"acc_norm_stderr\": 0.02815637344037142\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6715686274509803,\n \"acc_stderr\": 0.01899970738316267,\n \
\ \"acc_norm\": 0.6715686274509803,\n \"acc_norm_stderr\": 0.01899970738316267\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7090909090909091,\n\
\ \"acc_stderr\": 0.04350271442923243,\n \"acc_norm\": 0.7090909090909091,\n\
\ \"acc_norm_stderr\": 0.04350271442923243\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7346938775510204,\n \"acc_stderr\": 0.0282638899437846,\n\
\ \"acc_norm\": 0.7346938775510204,\n \"acc_norm_stderr\": 0.0282638899437846\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8507462686567164,\n\
\ \"acc_stderr\": 0.025196929874827072,\n \"acc_norm\": 0.8507462686567164,\n\
\ \"acc_norm_stderr\": 0.025196929874827072\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.85,\n \"acc_stderr\": 0.0358870281282637,\n \
\ \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.0358870281282637\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.536144578313253,\n\
\ \"acc_stderr\": 0.038823108508905954,\n \"acc_norm\": 0.536144578313253,\n\
\ \"acc_norm_stderr\": 0.038823108508905954\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8421052631578947,\n \"acc_stderr\": 0.027966785859160886,\n\
\ \"acc_norm\": 0.8421052631578947,\n \"acc_norm_stderr\": 0.027966785859160886\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.423500611995104,\n\
\ \"mc1_stderr\": 0.017297421448534727,\n \"mc2\": 0.6006477207402555,\n\
\ \"mc2_stderr\": 0.015322634627558495\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7908445146014207,\n \"acc_stderr\": 0.011430450045881578\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5572403335860501,\n \
\ \"acc_stderr\": 0.013681937191764627\n }\n}\n```"
repo_url: https://huggingface.co/Nitral-AI/Nyan-Stunna-7B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|arc:challenge|25_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|gsm8k|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hellaswag|10_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-07T04-44-12.361075.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-07T04-44-12.361075.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- '**/details_harness|winogrande|5_2024-04-07T04-44-12.361075.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-04-07T04-44-12.361075.parquet'
- config_name: results
data_files:
- split: 2024_04_07T04_44_12.361075
path:
- results_2024-04-07T04-44-12.361075.parquet
- split: latest
path:
- results_2024-04-07T04-44-12.361075.parquet
---
# Dataset Card for Evaluation run of Nitral-AI/Nyan-Stunna-7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Nitral-AI/Nyan-Stunna-7B](https://huggingface.co/Nitral-AI/Nyan-Stunna-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Nitral-AI__Nyan-Stunna-7B",
"harness_winogrande_5",
split="train")
```
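The timestamped splits follow a simple naming convention: the run timestamp with its `-` separators replaced by `_` (compare the filenames and split names in the YAML above). As an illustration only, a hypothetical helper (not part of the `datasets` library) sketching that mapping:

```python
def run_timestamp_to_split(ts: str) -> str:
    """Convert a run timestamp like '2024-04-07T04-44-12.361075'
    into the split name used in this repo ('2024_04_07T04_44_12.361075')."""
    date_part, time_part = ts.split("T")
    return date_part.replace("-", "_") + "T" + time_part.replace("-", "_")

print(run_timestamp_to_split("2024-04-07T04-44-12.361075"))
# -> 2024_04_07T04_44_12.361075
```

Passing the resulting string as `split=` (or simply `split="latest"`) selects a specific run's details.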
## Latest results
These are the [latest results from run 2024-04-07T04:44:12.361075](https://huggingface.co/datasets/open-llm-leaderboard/details_Nitral-AI__Nyan-Stunna-7B/blob/main/results_2024-04-07T04-44-12.361075.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6470185192425486,
"acc_stderr": 0.03219002860343247,
"acc_norm": 0.6496484281299122,
"acc_norm_stderr": 0.032831744184837246,
"mc1": 0.423500611995104,
"mc1_stderr": 0.017297421448534727,
"mc2": 0.6006477207402555,
"mc2_stderr": 0.015322634627558495
},
"harness|arc:challenge|25": {
"acc": 0.6399317406143344,
"acc_stderr": 0.014027516814585188,
"acc_norm": 0.6638225255972696,
"acc_norm_stderr": 0.01380485502620576
},
"harness|hellaswag|10": {
"acc": 0.6702848038239394,
"acc_stderr": 0.00469148881303216,
"acc_norm": 0.8556064528978291,
"acc_norm_stderr": 0.0035076999350742376
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.38,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.38,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6148148148148148,
"acc_stderr": 0.04203921040156279,
"acc_norm": 0.6148148148148148,
"acc_norm_stderr": 0.04203921040156279
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6710526315789473,
"acc_stderr": 0.03823428969926605,
"acc_norm": 0.6710526315789473,
"acc_norm_stderr": 0.03823428969926605
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.58,
"acc_stderr": 0.04960449637488583,
"acc_norm": 0.58,
"acc_norm_stderr": 0.04960449637488583
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7018867924528301,
"acc_stderr": 0.028152837942493864,
"acc_norm": 0.7018867924528301,
"acc_norm_stderr": 0.028152837942493864
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7569444444444444,
"acc_stderr": 0.035868792800803406,
"acc_norm": 0.7569444444444444,
"acc_norm_stderr": 0.035868792800803406
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6589595375722543,
"acc_stderr": 0.03614665424180826,
"acc_norm": 0.6589595375722543,
"acc_norm_stderr": 0.03614665424180826
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.43137254901960786,
"acc_stderr": 0.04928099597287533,
"acc_norm": 0.43137254901960786,
"acc_norm_stderr": 0.04928099597287533
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.79,
"acc_stderr": 0.04093601807403326,
"acc_norm": 0.79,
"acc_norm_stderr": 0.04093601807403326
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5446808510638298,
"acc_stderr": 0.03255525359340355,
"acc_norm": 0.5446808510638298,
"acc_norm_stderr": 0.03255525359340355
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4824561403508772,
"acc_stderr": 0.04700708033551038,
"acc_norm": 0.4824561403508772,
"acc_norm_stderr": 0.04700708033551038
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5724137931034483,
"acc_stderr": 0.04122737111370332,
"acc_norm": 0.5724137931034483,
"acc_norm_stderr": 0.04122737111370332
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3941798941798942,
"acc_stderr": 0.02516798233389414,
"acc_norm": 0.3941798941798942,
"acc_norm_stderr": 0.02516798233389414
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4365079365079365,
"acc_stderr": 0.04435932892851466,
"acc_norm": 0.4365079365079365,
"acc_norm_stderr": 0.04435932892851466
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.37,
"acc_stderr": 0.048523658709391,
"acc_norm": 0.37,
"acc_norm_stderr": 0.048523658709391
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7516129032258064,
"acc_stderr": 0.024580028921481003,
"acc_norm": 0.7516129032258064,
"acc_norm_stderr": 0.024580028921481003
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5369458128078818,
"acc_stderr": 0.035083705204426656,
"acc_norm": 0.5369458128078818,
"acc_norm_stderr": 0.035083705204426656
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7636363636363637,
"acc_stderr": 0.03317505930009181,
"acc_norm": 0.7636363636363637,
"acc_norm_stderr": 0.03317505930009181
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8080808080808081,
"acc_stderr": 0.028057791672989017,
"acc_norm": 0.8080808080808081,
"acc_norm_stderr": 0.028057791672989017
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8963730569948186,
"acc_stderr": 0.02199531196364424,
"acc_norm": 0.8963730569948186,
"acc_norm_stderr": 0.02199531196364424
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6564102564102564,
"acc_stderr": 0.024078696580635477,
"acc_norm": 0.6564102564102564,
"acc_norm_stderr": 0.024078696580635477
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34074074074074073,
"acc_stderr": 0.028897748741131143,
"acc_norm": 0.34074074074074073,
"acc_norm_stderr": 0.028897748741131143
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6722689075630253,
"acc_stderr": 0.03048991141767323,
"acc_norm": 0.6722689075630253,
"acc_norm_stderr": 0.03048991141767323
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3509933774834437,
"acc_stderr": 0.03896981964257375,
"acc_norm": 0.3509933774834437,
"acc_norm_stderr": 0.03896981964257375
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8330275229357799,
"acc_stderr": 0.01599015488507338,
"acc_norm": 0.8330275229357799,
"acc_norm_stderr": 0.01599015488507338
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.49537037037037035,
"acc_stderr": 0.03409825519163572,
"acc_norm": 0.49537037037037035,
"acc_norm_stderr": 0.03409825519163572
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8382352941176471,
"acc_stderr": 0.025845017986926917,
"acc_norm": 0.8382352941176471,
"acc_norm_stderr": 0.025845017986926917
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8143459915611815,
"acc_stderr": 0.025310495376944856,
"acc_norm": 0.8143459915611815,
"acc_norm_stderr": 0.025310495376944856
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6681614349775785,
"acc_stderr": 0.03160295143776679,
"acc_norm": 0.6681614349775785,
"acc_norm_stderr": 0.03160295143776679
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7633587786259542,
"acc_stderr": 0.03727673575596913,
"acc_norm": 0.7633587786259542,
"acc_norm_stderr": 0.03727673575596913
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8099173553719008,
"acc_stderr": 0.03581796951709282,
"acc_norm": 0.8099173553719008,
"acc_norm_stderr": 0.03581796951709282
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7870370370370371,
"acc_stderr": 0.0395783547198098,
"acc_norm": 0.7870370370370371,
"acc_norm_stderr": 0.0395783547198098
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7852760736196319,
"acc_stderr": 0.032262193772867744,
"acc_norm": 0.7852760736196319,
"acc_norm_stderr": 0.032262193772867744
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5,
"acc_stderr": 0.04745789978762494,
"acc_norm": 0.5,
"acc_norm_stderr": 0.04745789978762494
},
"harness|hendrycksTest-management|5": {
"acc": 0.7766990291262136,
"acc_stderr": 0.04123553189891431,
"acc_norm": 0.7766990291262136,
"acc_norm_stderr": 0.04123553189891431
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8760683760683761,
"acc_stderr": 0.021586494001281365,
"acc_norm": 0.8760683760683761,
"acc_norm_stderr": 0.021586494001281365
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8237547892720306,
"acc_stderr": 0.013625556907993457,
"acc_norm": 0.8237547892720306,
"acc_norm_stderr": 0.013625556907993457
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7225433526011561,
"acc_stderr": 0.024105712607754307,
"acc_norm": 0.7225433526011561,
"acc_norm_stderr": 0.024105712607754307
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.37318435754189944,
"acc_stderr": 0.016175692013381957,
"acc_norm": 0.37318435754189944,
"acc_norm_stderr": 0.016175692013381957
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7189542483660131,
"acc_stderr": 0.025738854797818733,
"acc_norm": 0.7189542483660131,
"acc_norm_stderr": 0.025738854797818733
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7041800643086816,
"acc_stderr": 0.025922371788818767,
"acc_norm": 0.7041800643086816,
"acc_norm_stderr": 0.025922371788818767
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7345679012345679,
"acc_stderr": 0.024569223600460845,
"acc_norm": 0.7345679012345679,
"acc_norm_stderr": 0.024569223600460845
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4929078014184397,
"acc_stderr": 0.02982449855912901,
"acc_norm": 0.4929078014184397,
"acc_norm_stderr": 0.02982449855912901
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4680573663624511,
"acc_stderr": 0.012744149704869647,
"acc_norm": 0.4680573663624511,
"acc_norm_stderr": 0.012744149704869647
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6875,
"acc_stderr": 0.02815637344037142,
"acc_norm": 0.6875,
"acc_norm_stderr": 0.02815637344037142
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6715686274509803,
"acc_stderr": 0.01899970738316267,
"acc_norm": 0.6715686274509803,
"acc_norm_stderr": 0.01899970738316267
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7090909090909091,
"acc_stderr": 0.04350271442923243,
"acc_norm": 0.7090909090909091,
"acc_norm_stderr": 0.04350271442923243
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7346938775510204,
"acc_stderr": 0.0282638899437846,
"acc_norm": 0.7346938775510204,
"acc_norm_stderr": 0.0282638899437846
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8507462686567164,
"acc_stderr": 0.025196929874827072,
"acc_norm": 0.8507462686567164,
"acc_norm_stderr": 0.025196929874827072
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.0358870281282637,
"acc_norm": 0.85,
"acc_norm_stderr": 0.0358870281282637
},
"harness|hendrycksTest-virology|5": {
"acc": 0.536144578313253,
"acc_stderr": 0.038823108508905954,
"acc_norm": 0.536144578313253,
"acc_norm_stderr": 0.038823108508905954
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8421052631578947,
"acc_stderr": 0.027966785859160886,
"acc_norm": 0.8421052631578947,
"acc_norm_stderr": 0.027966785859160886
},
"harness|truthfulqa:mc|0": {
"mc1": 0.423500611995104,
"mc1_stderr": 0.017297421448534727,
"mc2": 0.6006477207402555,
"mc2_stderr": 0.015322634627558495
},
"harness|winogrande|5": {
"acc": 0.7908445146014207,
"acc_stderr": 0.011430450045881578
},
"harness|gsm8k|5": {
"acc": 0.5572403335860501,
"acc_stderr": 0.013681937191764627
}
}
```
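Once the JSON above has been parsed into a Python dict, the per-task MMLU accuracies can be pulled out of the `harness|hendrycksTest-*` keys. A minimal sketch (a small excerpt of the results is inlined for illustration; the key-parsing logic is the same for the full dict):

```python
# Excerpt of the results dict shown above, keyed by "harness|<task>|<n_shots>".
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.38},
    "harness|hendrycksTest-computer_security|5": {"acc": 0.79},
    "harness|hendrycksTest-college_mathematics|5": {"acc": 0.34},
}

# Strip the "harness|hendrycksTest-" prefix and "|5" suffix to get task names.
mmlu = {
    task.split("-", 1)[1].split("|")[0]: scores["acc"]
    for task, scores in results.items()
    if task.startswith("harness|hendrycksTest-")
}

for task, acc in sorted(mmlu.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{task}: {acc:.2%}")
```

The same pattern applies to `acc_norm`, `mc1`/`mc2`, or any other metric stored per task.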
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
hooking-dev/Hebrew_QA | ---
license: apache-2.0
---
|
bdsaglam/web_nlg-erx | ---
dataset_info:
features:
- name: text
dtype: string
- name: triplets
sequence: string
splits:
- name: train
num_bytes: 9341180
num_examples: 35426
- name: dev
num_bytes: 1181212
num_examples: 4464
- name: test
num_bytes: 2179352
num_examples: 7305
download_size: 2613682
dataset_size: 12701744
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev
path: data/dev-*
- split: test
path: data/test-*
---
|
tyzhu/random_letter_same_length_find_passage_train30_eval40_num | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 32602
num_examples: 100
- name: validation
num_bytes: 15422
num_examples: 40
download_size: 34737
dataset_size: 48024
---
# Dataset Card for "random_letter_same_length_find_passage_train30_eval40_num"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nilamasrouri98/finetune | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 4201526
num_examples: 1000
download_size: 2247084
dataset_size: 4201526
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
autoevaluate/autoeval-staging-eval-project-6598b244-9392-4c7f-a1a9-2f5ffa8b50f8-3230 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: binary_classification
model: autoevaluate/binary-classification
metrics: ['matthews_correlation']
dataset_name: glue
dataset_config: sst2
dataset_split: validation
col_mapping:
text: sentence
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
Multimodal-Fatima/VQAv2_test_no_image_split_6 | ---
dataset_info:
features:
- name: question_type
dtype: string
- name: multiple_choice_answer
dtype: string
- name: answers
sequence: string
- name: answers_original
list:
- name: answer
dtype: string
- name: answer_confidence
dtype: string
- name: answer_id
dtype: int64
- name: id_image
dtype: int64
- name: answer_type
dtype: string
- name: question_id
dtype: int64
- name: question
dtype: string
- name: id
dtype: int64
- name: clip_tags_ViT_L_14
sequence: string
- name: blip_caption
dtype: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14
sequence: string
- name: DETA_detections_deta_swin_large_o365_coco_classes
list:
- name: attribute
dtype: string
- name: box
sequence: float32
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float32
- name: size
dtype: string
- name: tag
dtype: string
- name: Attributes_ViT_L_14_descriptors_text_davinci_003_full
sequence: string
- name: clip_tags_ViT_L_14_wo_openai
sequence: string
- name: clip_tags_ViT_L_14_with_openai
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B_wo_openai
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B_with_openai
sequence: string
- name: clip_tags_LAION_ViT_bigG_14_2B_wo_openai
sequence: string
- name: clip_tags_LAION_ViT_bigG_14_2B_with_openai
sequence: string
- name: Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full
sequence: string
- name: Attributes_LAION_ViT_bigG_14_2B_descriptors_text_davinci_003_full
sequence: string
- name: DETA_detections_deta_swin_large_o365_coco_classes_caption_module_random
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: captions_module
sequence: string
- name: captions_module_filter
sequence: string
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
splits:
- name: test
num_bytes: 2205691499
num_examples: 44779
download_size: 577059344
dataset_size: 2205691499
---
# Dataset Card for "VQAv2_test_no_image_split_6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
allandclive/luganda_bible_audio_100Hrs | ---
task_categories:
- automatic-speech-recognition
- text-to-speech
language:
- lg
---
# Luganda Bible Audio 100Hrs
- Audio split by chapter for the Old & New Testaments, with transcripts
- Format: MP3, 64 kbps
- Requires further splitting of the audio into smaller chunks (see https://github.com/facebookresearch/fairseq/tree/main/examples/mms/data_prep)
- Audio sourced from https://www.faithcomesbyhearing.com/ |
Babypotatotang/logo-splitted | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 321359468.84
num_examples: 24080
- name: test
num_bytes: 82173680.498
num_examples: 6021
download_size: 266044858
dataset_size: 403533149.33799994
---
# Dataset Card for "logo-splitted"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/MULTI_VALUE_mrpc_shadow_pronouns | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: test
num_bytes: 25681
num_examples: 92
- name: train
num_bytes: 63262
num_examples: 213
- name: validation
num_bytes: 6075
num_examples: 20
download_size: 74589
dataset_size: 95018
---
# Dataset Card for "MULTI_VALUE_mrpc_shadow_pronouns"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/MULTI_VALUE_wnli_too_sub | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 1353
num_examples: 7
- name: test
num_bytes: 2001
num_examples: 7
- name: train
num_bytes: 11079
num_examples: 62
download_size: 14193
dataset_size: 14433
---
# Dataset Card for "MULTI_VALUE_wnli_too_sub"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jarrydmartinx/deepsurv_gbsg | ---
dataset_info:
features:
- name: x0
dtype: float32
- name: x1
dtype: float32
- name: x2
dtype: float32
- name: x3
dtype: float32
- name: x4
dtype: float32
- name: x5
dtype: float32
- name: x6
dtype: float32
- name: event_time
dtype: float32
- name: event_indicator
dtype: int32
splits:
- name: train
num_bytes: 80352
num_examples: 2232
download_size: 34711
dataset_size: 80352
---
# Dataset Card for "deepsurv_gbsg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
GHOFRANEE/ALCORA | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 15349
num_examples: 6
download_size: 20916
dataset_size: 15349
---
# Dataset Card for "ALCORA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Sidharthkr/Vizbiz | ---
license: other
---
|
huggingartists/krechet | ---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/krechet"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.03618 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/61181ccb60b6a0e1e7f8fb8ae2a2ab0a.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/krechet">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Krechet</div>
<a href="https://genius.com/artists/krechet">
<div style="text-align: center; font-size: 14px;">@krechet</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/krechet).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/krechet")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train | validation | test |
|------:|-----------:|-----:|
|     5 |          - |    - |
The 'train' split can easily be divided into 'train', 'validation', and 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/krechet")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(
    datasets['train']['text'],
    [
        int(len(datasets['train']['text']) * train_percentage),
        int(len(datasets['train']['text']) * (train_percentage + validation_percentage)),
    ],
)
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
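The same 90% / 7% / 3% proportional split can also be sketched in plain Python with integer arithmetic, which avoids floating-point index surprises. The `texts` list below is a toy stand-in for `datasets['train']['text']`:

```python
import random

# Toy stand-in for datasets['train']['text']; the real list comes from the hub.
texts = [f"song {i}" for i in range(100)]

random.seed(0)
random.shuffle(texts)  # shuffle before slicing so the splits are unbiased

# Integer arithmetic gives a deterministic 90% / 7% / 3% partition.
train_end = len(texts) * 90 // 100
val_end = len(texts) * 97 // 100

splits = {
    "train": texts[:train_end],
    "validation": texts[train_end:val_end],
    "test": texts[val_end:],
}
print({k: len(v) for k, v in splits.items()})  # → {'train': 90, 'validation': 7, 'test': 3}
```

The three slices are disjoint and cover the whole list, so no example is duplicated or dropped.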
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author = {Aleksey Korshuk},
    year   = {2021}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
CyberHarem/parvati_fgo | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of parvati/パールヴァティー/帕尔瓦蒂 (Fate/Grand Order)
This is the dataset of parvati/パールヴァティー/帕尔瓦蒂 (Fate/Grand Order), containing 125 images and their tags.
The core tags of this character are `purple_hair, long_hair, purple_eyes, breasts, earrings, ribbon, hair_ribbon, large_breasts, hair_ornament, hair_flower`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 125 | 145.89 MiB | [Download](https://huggingface.co/datasets/CyberHarem/parvati_fgo/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 125 | 131.95 MiB | [Download](https://huggingface.co/datasets/CyberHarem/parvati_fgo/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 272 | 243.10 MiB | [Download](https://huggingface.co/datasets/CyberHarem/parvati_fgo/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/parvati_fgo',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 |  |  |  |  |  | 1girl, bracelet, solo, lotus, necklace, polearm, looking_at_viewer, smile, holding, short_sleeves, blue_dress |
| 1 | 10 |  |  |  |  |  | 1girl, crop_top, looking_at_viewer, midriff, navel, necklace, solo, skirt, blue_shirt, short_sleeves, smile, circlet, polearm, white_background, collarbone, holding, simple_background, cape |
| 2 | 5 |  |  |  |  |  | 1girl, necklace, solo, bracelet, looking_at_viewer, smile, blue_dress, flower, petals, short_sleeves, circlet |
| 3 | 12 |  |  |  |  |  | homurahara_academy_school_uniform, red_ribbon, white_shirt, brown_vest, smile, 1girl, jewelry, long_sleeves, looking_at_viewer, solo, blush, collared_shirt, closed_mouth, neck_ribbon, simple_background, skirt, upper_body |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bracelet | solo | lotus | necklace | polearm | looking_at_viewer | smile | holding | short_sleeves | blue_dress | crop_top | midriff | navel | skirt | blue_shirt | circlet | white_background | collarbone | simple_background | cape | flower | petals | homurahara_academy_school_uniform | red_ribbon | white_shirt | brown_vest | jewelry | long_sleeves | blush | collared_shirt | closed_mouth | neck_ribbon | upper_body |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------|:-------|:--------|:-----------|:----------|:--------------------|:--------|:----------|:----------------|:-------------|:-----------|:----------|:--------|:--------|:-------------|:----------|:-------------------|:-------------|:--------------------|:-------|:---------|:---------|:------------------------------------|:-------------|:--------------|:-------------|:----------|:---------------|:--------|:-----------------|:---------------|:--------------|:-------------|
| 0 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 10 |  |  |  |  |  | X | | X | | X | X | X | X | X | X | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | X | X | | X | | X | X | | X | X | | | | | | X | | | | | X | X | | | | | | | | | | | |
| 3 | 12 |  |  |  |  |  | X | | X | | | | X | X | | | | | | | X | | | | | X | | | | X | X | X | X | X | X | X | X | X | X | X |
|
Cartinoe5930/Politifact_fake_news | ---
license: unknown
---
|
DataStudio/OCR_document_blackSeal | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 376264737.75
num_examples: 71970
download_size: 356158029
dataset_size: 376264737.75
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: other
task_categories:
- image-to-text
language:
- vi
pretty_name: OCR black seal document
size_categories:
- 100K<n<1M
---
# Dataset Card for "OCR_document_blackSeal"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_Abhaykoul__HelpingAI-3B | ---
pretty_name: Evaluation run of Abhaykoul/HelpingAI-3B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Abhaykoul/HelpingAI-3B](https://huggingface.co/Abhaykoul/HelpingAI-3B) on the\
\ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Abhaykoul__HelpingAI-3B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-23T12:32:30.978679](https://huggingface.co/datasets/open-llm-leaderboard/details_Abhaykoul__HelpingAI-3B/blob/main/results_2024-03-23T12-32-30.978679.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.4720493443416543,\n\
\ \"acc_stderr\": 0.03462534722208238,\n \"acc_norm\": 0.4739086571342648,\n\
\ \"acc_norm_stderr\": 0.03534085984323595,\n \"mc1\": 0.390452876376989,\n\
\ \"mc1_stderr\": 0.01707823074343145,\n \"mc2\": 0.556234093472517,\n\
\ \"mc2_stderr\": 0.01598283488815226\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.4786689419795222,\n \"acc_stderr\": 0.014598087973127106,\n\
\ \"acc_norm\": 0.5059726962457338,\n \"acc_norm_stderr\": 0.014610348300255793\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5892252539334794,\n\
\ \"acc_stderr\": 0.004909689876342045,\n \"acc_norm\": 0.7663811989643498,\n\
\ \"acc_norm_stderr\": 0.004222676709104566\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \
\ \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4,\n \
\ \"acc_stderr\": 0.04232073695151589,\n \"acc_norm\": 0.4,\n \"\
acc_norm_stderr\": 0.04232073695151589\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.46710526315789475,\n \"acc_stderr\": 0.040601270352363966,\n\
\ \"acc_norm\": 0.46710526315789475,\n \"acc_norm_stderr\": 0.040601270352363966\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.53,\n\
\ \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.53,\n \
\ \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.5283018867924528,\n \"acc_stderr\": 0.0307235352490061,\n\
\ \"acc_norm\": 0.5283018867924528,\n \"acc_norm_stderr\": 0.0307235352490061\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.4791666666666667,\n\
\ \"acc_stderr\": 0.04177578950739993,\n \"acc_norm\": 0.4791666666666667,\n\
\ \"acc_norm_stderr\": 0.04177578950739993\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.43,\n \"acc_stderr\": 0.049756985195624284,\n \"acc_norm\": 0.43,\n\
\ \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.4393063583815029,\n\
\ \"acc_stderr\": 0.03784271932887467,\n \"acc_norm\": 0.4393063583815029,\n\
\ \"acc_norm_stderr\": 0.03784271932887467\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.3235294117647059,\n \"acc_stderr\": 0.04655010411319617,\n\
\ \"acc_norm\": 0.3235294117647059,\n \"acc_norm_stderr\": 0.04655010411319617\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.61,\n \"acc_stderr\": 0.04902071300001974,\n \"acc_norm\": 0.61,\n\
\ \"acc_norm_stderr\": 0.04902071300001974\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.39148936170212767,\n \"acc_stderr\": 0.03190701242326812,\n\
\ \"acc_norm\": 0.39148936170212767,\n \"acc_norm_stderr\": 0.03190701242326812\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2631578947368421,\n\
\ \"acc_stderr\": 0.041424397194893624,\n \"acc_norm\": 0.2631578947368421,\n\
\ \"acc_norm_stderr\": 0.041424397194893624\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.4689655172413793,\n \"acc_stderr\": 0.04158632762097828,\n\
\ \"acc_norm\": 0.4689655172413793,\n \"acc_norm_stderr\": 0.04158632762097828\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.31216931216931215,\n \"acc_stderr\": 0.0238652068369726,\n \"\
acc_norm\": 0.31216931216931215,\n \"acc_norm_stderr\": 0.0238652068369726\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.35714285714285715,\n\
\ \"acc_stderr\": 0.04285714285714281,\n \"acc_norm\": 0.35714285714285715,\n\
\ \"acc_norm_stderr\": 0.04285714285714281\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.5096774193548387,\n\
\ \"acc_stderr\": 0.028438677998909548,\n \"acc_norm\": 0.5096774193548387,\n\
\ \"acc_norm_stderr\": 0.028438677998909548\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.33004926108374383,\n \"acc_stderr\": 0.03308530426228258,\n\
\ \"acc_norm\": 0.33004926108374383,\n \"acc_norm_stderr\": 0.03308530426228258\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.41,\n \"acc_stderr\": 0.049431107042371025,\n \"acc_norm\"\
: 0.41,\n \"acc_norm_stderr\": 0.049431107042371025\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.6,\n \"acc_stderr\": 0.03825460278380026,\n \
\ \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.03825460278380026\n },\n\
\ \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.5404040404040404,\n\
\ \"acc_stderr\": 0.035507024651313425,\n \"acc_norm\": 0.5404040404040404,\n\
\ \"acc_norm_stderr\": 0.035507024651313425\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\"\
: {\n \"acc\": 0.6424870466321243,\n \"acc_stderr\": 0.034588160421810114,\n\
\ \"acc_norm\": 0.6424870466321243,\n \"acc_norm_stderr\": 0.034588160421810114\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.4076923076923077,\n \"acc_stderr\": 0.024915243985987833,\n\
\ \"acc_norm\": 0.4076923076923077,\n \"acc_norm_stderr\": 0.024915243985987833\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.24444444444444444,\n \"acc_stderr\": 0.02620276653465215,\n \
\ \"acc_norm\": 0.24444444444444444,\n \"acc_norm_stderr\": 0.02620276653465215\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.3865546218487395,\n \"acc_stderr\": 0.0316314580755238,\n \
\ \"acc_norm\": 0.3865546218487395,\n \"acc_norm_stderr\": 0.0316314580755238\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3509933774834437,\n \"acc_stderr\": 0.03896981964257375,\n \"\
acc_norm\": 0.3509933774834437,\n \"acc_norm_stderr\": 0.03896981964257375\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.653211009174312,\n \"acc_stderr\": 0.02040609710409302,\n \"acc_norm\"\
: 0.653211009174312,\n \"acc_norm_stderr\": 0.02040609710409302\n },\n\
\ \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.33796296296296297,\n\
\ \"acc_stderr\": 0.03225941352631295,\n \"acc_norm\": 0.33796296296296297,\n\
\ \"acc_norm_stderr\": 0.03225941352631295\n },\n \"harness|hendrycksTest-high_school_us_history|5\"\
: {\n \"acc\": 0.6029411764705882,\n \"acc_stderr\": 0.034341311647191286,\n\
\ \"acc_norm\": 0.6029411764705882,\n \"acc_norm_stderr\": 0.034341311647191286\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.6751054852320675,\n \"acc_stderr\": 0.03048603938910531,\n \
\ \"acc_norm\": 0.6751054852320675,\n \"acc_norm_stderr\": 0.03048603938910531\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.5426008968609866,\n\
\ \"acc_stderr\": 0.033435777055830646,\n \"acc_norm\": 0.5426008968609866,\n\
\ \"acc_norm_stderr\": 0.033435777055830646\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.5038167938931297,\n \"acc_stderr\": 0.043851623256015534,\n\
\ \"acc_norm\": 0.5038167938931297,\n \"acc_norm_stderr\": 0.043851623256015534\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.6198347107438017,\n \"acc_stderr\": 0.04431324501968432,\n \"\
acc_norm\": 0.6198347107438017,\n \"acc_norm_stderr\": 0.04431324501968432\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.5648148148148148,\n\
\ \"acc_stderr\": 0.04792898170907061,\n \"acc_norm\": 0.5648148148148148,\n\
\ \"acc_norm_stderr\": 0.04792898170907061\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.49693251533742333,\n \"acc_stderr\": 0.03928297078179663,\n\
\ \"acc_norm\": 0.49693251533742333,\n \"acc_norm_stderr\": 0.03928297078179663\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.3482142857142857,\n\
\ \"acc_stderr\": 0.04521829902833585,\n \"acc_norm\": 0.3482142857142857,\n\
\ \"acc_norm_stderr\": 0.04521829902833585\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.6407766990291263,\n \"acc_stderr\": 0.04750458399041696,\n\
\ \"acc_norm\": 0.6407766990291263,\n \"acc_norm_stderr\": 0.04750458399041696\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.6923076923076923,\n\
\ \"acc_stderr\": 0.030236389942173102,\n \"acc_norm\": 0.6923076923076923,\n\
\ \"acc_norm_stderr\": 0.030236389942173102\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \
\ \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.5938697318007663,\n\
\ \"acc_stderr\": 0.017562037406478916,\n \"acc_norm\": 0.5938697318007663,\n\
\ \"acc_norm_stderr\": 0.017562037406478916\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.4682080924855491,\n \"acc_stderr\": 0.026864624366756656,\n\
\ \"acc_norm\": 0.4682080924855491,\n \"acc_norm_stderr\": 0.026864624366756656\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.27150837988826815,\n\
\ \"acc_stderr\": 0.01487425216809527,\n \"acc_norm\": 0.27150837988826815,\n\
\ \"acc_norm_stderr\": 0.01487425216809527\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.46405228758169936,\n \"acc_stderr\": 0.02855582751652878,\n\
\ \"acc_norm\": 0.46405228758169936,\n \"acc_norm_stderr\": 0.02855582751652878\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.47266881028938906,\n\
\ \"acc_stderr\": 0.028355633568328174,\n \"acc_norm\": 0.47266881028938906,\n\
\ \"acc_norm_stderr\": 0.028355633568328174\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.5216049382716049,\n \"acc_stderr\": 0.027794760105008746,\n\
\ \"acc_norm\": 0.5216049382716049,\n \"acc_norm_stderr\": 0.027794760105008746\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.3333333333333333,\n \"acc_stderr\": 0.028121636040639882,\n \
\ \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.028121636040639882\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.3578878748370274,\n\
\ \"acc_stderr\": 0.012243563850490302,\n \"acc_norm\": 0.3578878748370274,\n\
\ \"acc_norm_stderr\": 0.012243563850490302\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.39338235294117646,\n \"acc_stderr\": 0.029674288281311172,\n\
\ \"acc_norm\": 0.39338235294117646,\n \"acc_norm_stderr\": 0.029674288281311172\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.43790849673202614,\n \"acc_stderr\": 0.020071257886886525,\n \
\ \"acc_norm\": 0.43790849673202614,\n \"acc_norm_stderr\": 0.020071257886886525\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6090909090909091,\n\
\ \"acc_stderr\": 0.04673752333670237,\n \"acc_norm\": 0.6090909090909091,\n\
\ \"acc_norm_stderr\": 0.04673752333670237\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.5061224489795918,\n \"acc_stderr\": 0.03200682020163907,\n\
\ \"acc_norm\": 0.5061224489795918,\n \"acc_norm_stderr\": 0.03200682020163907\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6517412935323383,\n\
\ \"acc_stderr\": 0.033687874661154596,\n \"acc_norm\": 0.6517412935323383,\n\
\ \"acc_norm_stderr\": 0.033687874661154596\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.42168674698795183,\n\
\ \"acc_stderr\": 0.03844453181770917,\n \"acc_norm\": 0.42168674698795183,\n\
\ \"acc_norm_stderr\": 0.03844453181770917\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.6140350877192983,\n \"acc_stderr\": 0.03733756969066165,\n\
\ \"acc_norm\": 0.6140350877192983,\n \"acc_norm_stderr\": 0.03733756969066165\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.390452876376989,\n\
\ \"mc1_stderr\": 0.01707823074343145,\n \"mc2\": 0.556234093472517,\n\
\ \"mc2_stderr\": 0.01598283488815226\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6779794790844514,\n \"acc_stderr\": 0.01313207020207106\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.36087945413191813,\n \
\ \"acc_stderr\": 0.01322862675392514\n }\n}\n```"
repo_url: https://huggingface.co/Abhaykoul/HelpingAI-3B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|arc:challenge|25_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|gsm8k|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hellaswag|10_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-23T12-32-30.978679.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-23T12-32-30.978679.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- '**/details_harness|winogrande|5_2024-03-23T12-32-30.978679.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-23T12-32-30.978679.parquet'
- config_name: results
data_files:
- split: 2024_03_23T12_32_30.978679
path:
- results_2024-03-23T12-32-30.978679.parquet
- split: latest
path:
- results_2024-03-23T12-32-30.978679.parquet
---
# Dataset Card for Evaluation run of Abhaykoul/HelpingAI-3B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Abhaykoul/HelpingAI-3B](https://huggingface.co/Abhaykoul/HelpingAI-3B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Abhaykoul__HelpingAI-3B",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-03-23T12:32:30.978679](https://huggingface.co/datasets/open-llm-leaderboard/details_Abhaykoul__HelpingAI-3B/blob/main/results_2024-03-23T12-32-30.978679.json) (note that there may be results for other tasks in the repository if successive evaluations did not cover the same tasks; you can find each task in the "results" configuration and in the "latest" split of each evaluation):
```python
{
"all": {
"acc": 0.4720493443416543,
"acc_stderr": 0.03462534722208238,
"acc_norm": 0.4739086571342648,
"acc_norm_stderr": 0.03534085984323595,
"mc1": 0.390452876376989,
"mc1_stderr": 0.01707823074343145,
"mc2": 0.556234093472517,
"mc2_stderr": 0.01598283488815226
},
"harness|arc:challenge|25": {
"acc": 0.4786689419795222,
"acc_stderr": 0.014598087973127106,
"acc_norm": 0.5059726962457338,
"acc_norm_stderr": 0.014610348300255793
},
"harness|hellaswag|10": {
"acc": 0.5892252539334794,
"acc_stderr": 0.004909689876342045,
"acc_norm": 0.7663811989643498,
"acc_norm_stderr": 0.004222676709104566
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.4,
"acc_stderr": 0.04232073695151589,
"acc_norm": 0.4,
"acc_norm_stderr": 0.04232073695151589
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.46710526315789475,
"acc_stderr": 0.040601270352363966,
"acc_norm": 0.46710526315789475,
"acc_norm_stderr": 0.040601270352363966
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.5283018867924528,
"acc_stderr": 0.0307235352490061,
"acc_norm": 0.5283018867924528,
"acc_norm_stderr": 0.0307235352490061
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.4791666666666667,
"acc_stderr": 0.04177578950739993,
"acc_norm": 0.4791666666666667,
"acc_norm_stderr": 0.04177578950739993
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.43,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.43,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252604,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252604
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.4393063583815029,
"acc_stderr": 0.03784271932887467,
"acc_norm": 0.4393063583815029,
"acc_norm_stderr": 0.03784271932887467
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3235294117647059,
"acc_stderr": 0.04655010411319617,
"acc_norm": 0.3235294117647059,
"acc_norm_stderr": 0.04655010411319617
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001974,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001974
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.39148936170212767,
"acc_stderr": 0.03190701242326812,
"acc_norm": 0.39148936170212767,
"acc_norm_stderr": 0.03190701242326812
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2631578947368421,
"acc_stderr": 0.041424397194893624,
"acc_norm": 0.2631578947368421,
"acc_norm_stderr": 0.041424397194893624
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.4689655172413793,
"acc_stderr": 0.04158632762097828,
"acc_norm": 0.4689655172413793,
"acc_norm_stderr": 0.04158632762097828
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.31216931216931215,
"acc_stderr": 0.0238652068369726,
"acc_norm": 0.31216931216931215,
"acc_norm_stderr": 0.0238652068369726
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.35714285714285715,
"acc_stderr": 0.04285714285714281,
"acc_norm": 0.35714285714285715,
"acc_norm_stderr": 0.04285714285714281
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.5096774193548387,
"acc_stderr": 0.028438677998909548,
"acc_norm": 0.5096774193548387,
"acc_norm_stderr": 0.028438677998909548
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.33004926108374383,
"acc_stderr": 0.03308530426228258,
"acc_norm": 0.33004926108374383,
"acc_norm_stderr": 0.03308530426228258
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.41,
"acc_stderr": 0.049431107042371025,
"acc_norm": 0.41,
"acc_norm_stderr": 0.049431107042371025
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.6,
"acc_stderr": 0.03825460278380026,
"acc_norm": 0.6,
"acc_norm_stderr": 0.03825460278380026
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.5404040404040404,
"acc_stderr": 0.035507024651313425,
"acc_norm": 0.5404040404040404,
"acc_norm_stderr": 0.035507024651313425
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.6424870466321243,
"acc_stderr": 0.034588160421810114,
"acc_norm": 0.6424870466321243,
"acc_norm_stderr": 0.034588160421810114
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.4076923076923077,
"acc_stderr": 0.024915243985987833,
"acc_norm": 0.4076923076923077,
"acc_norm_stderr": 0.024915243985987833
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.24444444444444444,
"acc_stderr": 0.02620276653465215,
"acc_norm": 0.24444444444444444,
"acc_norm_stderr": 0.02620276653465215
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.3865546218487395,
"acc_stderr": 0.0316314580755238,
"acc_norm": 0.3865546218487395,
"acc_norm_stderr": 0.0316314580755238
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3509933774834437,
"acc_stderr": 0.03896981964257375,
"acc_norm": 0.3509933774834437,
"acc_norm_stderr": 0.03896981964257375
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.653211009174312,
"acc_stderr": 0.02040609710409302,
"acc_norm": 0.653211009174312,
"acc_norm_stderr": 0.02040609710409302
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.33796296296296297,
"acc_stderr": 0.03225941352631295,
"acc_norm": 0.33796296296296297,
"acc_norm_stderr": 0.03225941352631295
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.6029411764705882,
"acc_stderr": 0.034341311647191286,
"acc_norm": 0.6029411764705882,
"acc_norm_stderr": 0.034341311647191286
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.6751054852320675,
"acc_stderr": 0.03048603938910531,
"acc_norm": 0.6751054852320675,
"acc_norm_stderr": 0.03048603938910531
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.5426008968609866,
"acc_stderr": 0.033435777055830646,
"acc_norm": 0.5426008968609866,
"acc_norm_stderr": 0.033435777055830646
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.5038167938931297,
"acc_stderr": 0.043851623256015534,
"acc_norm": 0.5038167938931297,
"acc_norm_stderr": 0.043851623256015534
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.6198347107438017,
"acc_stderr": 0.04431324501968432,
"acc_norm": 0.6198347107438017,
"acc_norm_stderr": 0.04431324501968432
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.5648148148148148,
"acc_stderr": 0.04792898170907061,
"acc_norm": 0.5648148148148148,
"acc_norm_stderr": 0.04792898170907061
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.49693251533742333,
"acc_stderr": 0.03928297078179663,
"acc_norm": 0.49693251533742333,
"acc_norm_stderr": 0.03928297078179663
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.3482142857142857,
"acc_stderr": 0.04521829902833585,
"acc_norm": 0.3482142857142857,
"acc_norm_stderr": 0.04521829902833585
},
"harness|hendrycksTest-management|5": {
"acc": 0.6407766990291263,
"acc_stderr": 0.04750458399041696,
"acc_norm": 0.6407766990291263,
"acc_norm_stderr": 0.04750458399041696
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.6923076923076923,
"acc_stderr": 0.030236389942173102,
"acc_norm": 0.6923076923076923,
"acc_norm_stderr": 0.030236389942173102
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.5938697318007663,
"acc_stderr": 0.017562037406478916,
"acc_norm": 0.5938697318007663,
"acc_norm_stderr": 0.017562037406478916
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.4682080924855491,
"acc_stderr": 0.026864624366756656,
"acc_norm": 0.4682080924855491,
"acc_norm_stderr": 0.026864624366756656
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.27150837988826815,
"acc_stderr": 0.01487425216809527,
"acc_norm": 0.27150837988826815,
"acc_norm_stderr": 0.01487425216809527
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.46405228758169936,
"acc_stderr": 0.02855582751652878,
"acc_norm": 0.46405228758169936,
"acc_norm_stderr": 0.02855582751652878
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.47266881028938906,
"acc_stderr": 0.028355633568328174,
"acc_norm": 0.47266881028938906,
"acc_norm_stderr": 0.028355633568328174
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.5216049382716049,
"acc_stderr": 0.027794760105008746,
"acc_norm": 0.5216049382716049,
"acc_norm_stderr": 0.027794760105008746
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.3333333333333333,
"acc_stderr": 0.028121636040639882,
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.028121636040639882
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.3578878748370274,
"acc_stderr": 0.012243563850490302,
"acc_norm": 0.3578878748370274,
"acc_norm_stderr": 0.012243563850490302
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.39338235294117646,
"acc_stderr": 0.029674288281311172,
"acc_norm": 0.39338235294117646,
"acc_norm_stderr": 0.029674288281311172
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.43790849673202614,
"acc_stderr": 0.020071257886886525,
"acc_norm": 0.43790849673202614,
"acc_norm_stderr": 0.020071257886886525
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6090909090909091,
"acc_stderr": 0.04673752333670237,
"acc_norm": 0.6090909090909091,
"acc_norm_stderr": 0.04673752333670237
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.5061224489795918,
"acc_stderr": 0.03200682020163907,
"acc_norm": 0.5061224489795918,
"acc_norm_stderr": 0.03200682020163907
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.6517412935323383,
"acc_stderr": 0.033687874661154596,
"acc_norm": 0.6517412935323383,
"acc_norm_stderr": 0.033687874661154596
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-virology|5": {
"acc": 0.42168674698795183,
"acc_stderr": 0.03844453181770917,
"acc_norm": 0.42168674698795183,
"acc_norm_stderr": 0.03844453181770917
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.6140350877192983,
"acc_stderr": 0.03733756969066165,
"acc_norm": 0.6140350877192983,
"acc_norm_stderr": 0.03733756969066165
},
"harness|truthfulqa:mc|0": {
"mc1": 0.390452876376989,
"mc1_stderr": 0.01707823074343145,
"mc2": 0.556234093472517,
"mc2_stderr": 0.01598283488815226
},
"harness|winogrande|5": {
"acc": 0.6779794790844514,
"acc_stderr": 0.01313207020207106
},
"harness|gsm8k|5": {
"acc": 0.36087945413191813,
"acc_stderr": 0.01322862675392514
}
}
```
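The aggregate scores in the `all` block are (approximately) unweighted means over the per-task scores. A minimal sketch of that computation, using a toy subset of three task accuracies copied from the results above (not the full task list, so the mean differs from the reported aggregate):

```python
# Toy subset of per-task accuracies copied from the results JSON above
task_acc = {
    "harness|hendrycksTest-marketing|5": 0.6923076923076923,
    "harness|hendrycksTest-virology|5": 0.42168674698795183,
    "harness|hendrycksTest-sociology|5": 0.6517412935323383,
}

# Unweighted mean over the selected tasks
mean_acc = sum(task_acc.values()) / len(task_acc)
print(round(mean_acc, 4))  # 0.5886
```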
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
andreped/LyNoS | ---
license: mit
task_categories:
- image-segmentation
language:
- en
tags:
- medical
pretty_name: LyNoS
size_categories:
- 1B<n<10B
---
<div align="center">
<h1 align="center">🫁 LyNoS 🤗</h1>
<h3 align="center">A multilabel lymph node segmentation dataset from contrast CT</h3>
**LyNoS** was developed by SINTEF Medical Image Analysis to accelerate medical AI research.
</div>
## [Brief intro](https://github.com/raidionics/LyNoS#brief-intro)
This repository contains the LyNoS dataset described in ["_Mediastinal lymph nodes segmentation using 3D convolutional neural network ensembles and anatomical priors guiding_"](https://doi.org/10.1080/21681163.2022.2043778).
The dataset has now also been uploaded to Zenodo and the Hugging Face Hub, enabling users to access the data more easily through a Python API.
We have also developed a web demo to enable others to easily test the pretrained model presented in the paper. The application was developed using [Gradio](https://www.gradio.app) for the frontend and the segmentation is performed using the [Raidionics](https://raidionics.github.io/) backend.
## [Dataset](https://github.com/raidionics/LyNoS#data) <a href="https://colab.research.google.com/gist/andreped/274bf953771059fd9537877404369bed/lynos-load-dataset-example.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
### [Accessing dataset](https://github.com/raidionics/LyNoS#accessing-dataset)
The dataset contains 15 CTs with corresponding lymph nodes, azygos, esophagus, and subclavian carotid arteries. The folder structure is described below.
The easiest way to access the data is through Python with Hugging Face's [datasets](https://pypi.org/project/datasets/) package:
```
from datasets import load_dataset
# downloads data from Zenodo through the Hugging Face hub
# - might take several minutes (~5 minutes in CoLab)
dataset = load_dataset("andreped/LyNoS")
print(dataset)
# list paths of all available patients and corresponding features (ct/lymphnodes/azygos/brachiocephalicveins/esophagus/subclaviancarotidarteries)
for d in dataset["test"]:
print(d)
```
A detailed interactive demo on how to load and work with the data can be seen on CoLab. Click the CoLab badge <a href="https://colab.research.google.com/gist/andreped/274bf953771059fd9537877404369bed/lynos-load-dataset-example.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> to see the notebook or alternatively click [here](https://github.com/raidionics/LyNoS/blob/main/notebooks/lynos-load-dataset-example.ipynb) to see it on GitHub.
### [Dataset structure](https://github.com/raidionics/LyNoS#dataset-structure)
```
└── LyNoS.zip
├── stations_sto.csv
└── LyNoS/
├── Pat1/
│ ├── pat1_data.nii.gz
│ ├── pat1_labels_Azygos.nii.gz
│ ├── pat1_labels_Esophagus.nii.gz
│ ├── pat1_labels_LymphNodes.nii.gz
│ └── pat1_labels_SubCarArt.nii.gz
├── [...]
└── Pat15/
├── pat15_data.nii.gz
├── pat15_labels_Azygos.nii.gz
├── pat15_labels_Esophagus.nii.gz
├── pat15_labels_LymphNodes.nii.gz
└── pat15_labels_SubCarArt.nii.gz
```
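The per-patient file layout above is regular, so the expected paths can be generated programmatically. A minimal sketch (the root directory is wherever `LyNoS.zip` was extracted; label names are taken from the listing above):

```python
from pathlib import Path

# Label suffixes as they appear in the folder listing
LABELS = ["Azygos", "Esophagus", "LymphNodes", "SubCarArt"]

def patient_files(root: str, patient_id: int) -> dict:
    """Build the expected file paths for one patient folder (Pat1..Pat15)."""
    pat_dir = Path(root) / f"Pat{patient_id}"
    files = {"ct": pat_dir / f"pat{patient_id}_data.nii.gz"}
    for label in LABELS:
        files[label.lower()] = pat_dir / f"pat{patient_id}_labels_{label}.nii.gz"
    return files

files = patient_files("LyNoS", 1)
print(files["ct"])  # LyNoS/Pat1/pat1_data.nii.gz
```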
### [NIH Dataset Completion](https://github.com/raidionics/LyNoS#nih-dataset-completion)
A larger dataset of 90 patients with enlarged lymph nodes has also been made available by the National Institutes of Health and can be downloaded from the official [web page](https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=19726546).
As a supplement to this dataset, lymph node segmentation masks have been refined for all patients and stations have been manually assigned to each node; they are available [here](https://drive.google.com/uc?id=1iVCnZc1GHwtx9scyAXdANqz2HdQArTHn).
## [Demo](https://github.com/raidionics/LyNoS#demo) <a target="_blank" href="https://huggingface.co/spaces/andreped/LyNoS"><img src="https://img.shields.io/badge/🤗%20Hugging%20Face-Spaces-yellow.svg"></a>
To access the live demo, click on the `Hugging Face` badge above. Below is a snapshot of the current state of the demo app.
<img width="1400" alt="Screenshot 2023-11-09 at 20 53 29" src="https://github.com/raidionics/LyNoS/assets/29090665/ce661da0-d172-4481-b9b5-8b3e29a9fc1f">
## [Development](https://github.com/raidionics/LyNoS#development)
### [Docker](https://github.com/raidionics/LyNoS#docker)
Alternatively, you can deploy the software locally. Note that this is only relevant for development purposes. Simply dockerize the app and run it:
```
docker build -t lynos .
docker run -it -p 7860:7860 lynos
```
Then open `http://127.0.0.1:7860` in your favourite internet browser to view the demo.
### [Python](https://github.com/raidionics/LyNoS#python)
It is also possible to run the app locally without Docker. Just set up a virtual environment and run the app.
Note that the current working directory would need to be adjusted based on where `LyNoS` is located on disk.
```
git clone https://github.com/raidionics/LyNoS.git
cd LyNoS/
virtualenv -p python3 venv --clear
source venv/bin/activate
pip install -r ./demo/requirements.txt
python demo/app.py --cwd ./
```
## [Citation](https://github.com/raidionics/LyNoS#citation)
If you found the dataset and/or web application relevant in your research, please cite the following reference:
```
@article{bouget2021mediastinal,
author = {David Bouget and André Pedersen and Johanna Vanel and Haakon O. Leira and Thomas Langø},
title = {Mediastinal lymph nodes segmentation using 3D convolutional neural network ensembles and anatomical priors guiding},
journal = {Computer Methods in Biomechanics and Biomedical Engineering: Imaging \& Visualization},
volume = {0},
number = {0},
pages = {1-15},
year = {2022},
publisher = {Taylor & Francis},
doi = {10.1080/21681163.2022.2043778},
URL = {https://doi.org/10.1080/21681163.2022.2043778},
eprint = {https://doi.org/10.1080/21681163.2022.2043778}
}
```
## [License](https://github.com/raidionics/LyNoS#license)
The code in this repository is released under [MIT license](https://github.com/raidionics/LyNoS/blob/main/LICENSE). |
amogh-sinha/documents | ---
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 24588.0
num_examples: 3
- name: test
num_bytes: 8196
num_examples: 1
download_size: 20270
dataset_size: 32784.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "documents"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
neil-code/autotrain-data-img-classification | ---
task_categories:
- image-classification
---
# AutoTrain Dataset for project: img-classification
## Dataset Description
This dataset has been automatically processed by AutoTrain for project img-classification.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<222x163 RGB PIL image>",
"target": 0
},
{
"image": "<222x163 RGB PIL image>",
"target": 3
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['elf', 'goblin', 'knight', 'zombie'], id=None)"
}
```
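The integer `target` values in the samples above map to class names through the `ClassLabel` feature. A minimal sketch of that mapping in plain Python (mirroring what `ClassLabel` does internally):

```python
# Class names as declared in the ClassLabel feature above
names = ["elf", "goblin", "knight", "zombie"]

# Forward (int -> name) and reverse (name -> int) lookup tables
int2str = dict(enumerate(names))
str2int = {n: i for i, n in enumerate(names)}

print(int2str[0], int2str[3])  # elf zombie
print(str2int["knight"])       # 2
```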
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1160 |
| valid | 291 |
|
Davi2586/Old_Dave_Mustaine_Ai_Model_New_Dataset | ---
license: apache-2.0
---
|
CyberHarem/togawa_sakiko_bangdreamitsmygo | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Togawa Sakiko
This is the dataset of Togawa Sakiko, containing 150 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 150 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 328 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 150 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 150 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 150 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 150 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 150 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 328 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 328 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 328 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
AliaeAI/wikiner_fr_LOC_PER | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-LOC
'2': I-LOC
'3': B-PER
'4': I-PER
splits:
- name: train
num_bytes: 53340203
num_examples: 120682
- name: validation
num_bytes: 5847428
num_examples: 13410
download_size: 14542966
dataset_size: 59187631
---
# Dataset Card for "wikiner_fr_LOC_PER"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
arbml/DAWQAS | ---
dataset_info:
features:
- name: QID
dtype: string
- name: Site_id
dtype: string
- name: Question
dtype: string
- name: Answer
dtype: string
- name: Answer1
dtype: string
- name: Answer2
dtype: string
- name: Answer3
dtype: string
- name: Answer4
dtype: string
- name: Answer5
dtype: string
- name: Answer6
dtype: string
- name: Answer7
dtype: string
- name: Answer8
dtype: string
- name: Answer9
dtype: string
- name: Answer10
dtype: string
- name: Answer11
dtype: string
- name: Original_Category
dtype: string
- name: Author
dtype: string
- name: Date
dtype: string
- name: Site
dtype: string
- name: Year
dtype: string
splits:
- name: train
num_bytes: 22437661
num_examples: 3209
download_size: 10844359
dataset_size: 22437661
---
# Dataset Card for "DAWQAS"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zumbul/Kralj | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
splits:
- name: train
num_bytes: 33607813.0
num_examples: 18
download_size: 30423947
dataset_size: 33607813.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
chuyin0321/revenue-estimate-stocks | ---
dataset_info:
features:
- name: symbol
dtype: string
- name: date
dtype: string
- name: current_qtr
dtype: string
- name: no_of_analysts_current_qtr
dtype: int64
- name: next_qtr
dtype: string
- name: no_of_analysts_next_qtr
dtype: int64
- name: current_year
dtype: int64
- name: no_of_analysts_current_year
dtype: int64
- name: next_year
dtype: int64
- name: no_of_analysts_next_year
dtype: int64
- name: avg_estimate_current_qtr
dtype: string
- name: avg_estimate_next_qtr
dtype: string
- name: avg_estimate_current_year
dtype: string
- name: avg_estimate_next_year
dtype: string
- name: low_estimate_current_qtr
dtype: string
- name: low_estimate_next_qtr
dtype: string
- name: low_estimate_current_year
dtype: string
- name: low_estimate_next_year
dtype: string
- name: high_estimate_current_qtr
dtype: string
- name: high_estimate_next_qtr
dtype: string
- name: high_estimate_current_year
dtype: string
- name: high_estimate_next_year
dtype: string
- name: year_ago_sales_current_qtr
dtype: string
- name: year_ago_sales_next_qtr
dtype: string
- name: year_ago_sales_current_year
dtype: string
- name: year_ago_sales_next_year
dtype: string
- name: sales_growth_yearest_current_qtr
dtype: string
- name: sales_growth_yearest_next_qtr
dtype: string
- name: sales_growth_yearest_current_year
dtype: string
- name: sales_growth_yearest_next_year
dtype: string
splits:
- name: train
num_bytes: 375640
num_examples: 1356
download_size: 198303
dataset_size: 375640
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "revenue-estimate-stocks"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gguichard/wsd_myriade_synth_data_gpt4turbo_val | ---
dataset_info:
features:
- name: tokens
sequence: string
- name: wn_sens
sequence: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 466852
num_examples: 676
download_size: 0
dataset_size: 466852
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "wsd_myriade_synth_data_gpt4turbo_val"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
uzumakiitachi/dataaaa | ---
license: other
---
|
micsell/hebrew_kan_sentence20000 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: id
dtype: string
- name: language
dtype: string
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1862352910.0
num_examples: 10000
download_size: 1861597061
dataset_size: 1862352910.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
fernanda-dionello/autotrain-data-goodreads_without_bookid | ---
language:
- en
task_categories:
- text-classification
---
# AutoTrain Dataset for project: goodreads_without_bookid
## Dataset Description
This dataset has been automatically processed by AutoTrain for project goodreads_without_bookid.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"target": 5,
"text": "This book was absolutely ADORABLE!!!!!!!!!!! It was an awesome light and FUN read. \n I loved the characters but I absolutely LOVED Cam!!!!!!!!!!!! Major Swoooon Worthy! J \n You've been checking me out haven't you? In-between your flaming insults? I feel like man candy. \n Seriously between being HOT FUNNY and OH SO VERY ADORABLE Cam was the perfect catch!! \n I'm not going out with you Cam. \n I didn't ask you at this moment now did I One side of his lips curved up. But you will eventually. \n You're delusional \n I'm determined. \n More like annoying. \n Most would say amazing. \n Cam and Avery's relationship is tough due to the secrets she keeps but he is the perfect match for breaking her out of her shell and facing her fears. \n This book is definitely a MUST READ. \n Trust me when I say this YOU will not regret it! \n www.Jenreadit.com"
},
{
"target": 4,
"text": "3.5 stars! \n Abbi Glines' books are a guilty pleasure for me. I love the Southern charm the sexy boys and the beautiful sweet girls. When You're Back is the second book in Reese and Mase's story and other characters from my other favorite books all make appearances here. \n I loved River Captain Kipling! This guy is SEXY and broody. He is a bit mysterious and I am really looking forward to reading more about him! \n I can change your world too sweetheart. But I'll wait my turn. \n We also have Mase's cousin Aida here who gives the cold shoulder to sweet loving Reese. I really liked how Reese blossomed in this book and how loving and devoted Mase is to her. He is one of my favorite of Abbi Glines' characters and he is definitely book boyfriend material. Their scenes are touching sexy and sweet just what I expect from the Rosemary Beach series. I liked this book and recommend it if you are looking for a sexy quick summertime read!"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"target": "ClassLabel(num_classes=6, names=['0', '1', '2', '3', '4', '5'], id=None)",
"text": "Value(dtype='string', id=None)"
}
```
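For illustration, the integer targets above can be mapped to their label names with a minimal pure-Python sketch that mimics the behavior of the `ClassLabel` feature's `int2str`/`str2int` (this is an illustrative stand-in, not the `datasets` library itself):

```python
# Minimal pure-Python sketch (an illustrative stand-in, not the `datasets`
# library itself) of how the ClassLabel above maps integer targets to names.
names = ['0', '1', '2', '3', '4', '5']

def int2str(target: int) -> str:
    # Integer class index -> label name, as ClassLabel.int2str would return.
    return names[target]

def str2int(name: str) -> int:
    # Label name -> integer class index, as ClassLabel.str2int would return.
    return names.index(name)

print(int2str(5))                 # -> '5' (the rating of the first sample above)
assert str2int(int2str(3)) == 3   # round-trip
```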
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 2358 |
| valid | 592 |
|
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_v5-mathemak-0d489a-2053267103 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test_v5
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-6.7b_eval
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test_v5
dataset_config: mathemakitten--winobias_antistereotype_test_v5
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-6.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
oliveira123321/bruce | ---
license: openrail
---
|
CyberHarem/lily_lipman_birdiewinggolfgirlsstory | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Lily Lipman
This is the dataset of Lily Lipman, containing 200 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
| Name | Images | Download | Description |
|:------------|---------:|:------------------------------------|:-------------------------------------------------------------------------|
| raw | 200 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 478 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| 384x512 | 200 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x512 | 200 | [Download](dataset-512x512.zip) | 512x512 aligned dataset. |
| 512x704 | 200 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x640 | 200 | [Download](dataset-640x640.zip) | 640x640 aligned dataset. |
| 640x880 | 200 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 478 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 478 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-1200 | 478 | [Download](dataset-stage3-1200.zip) | 3-stage cropped dataset with the shorter side not exceeding 1200 pixels. |
|
shravanm/CS673QADataset | ---
license: apache-2.0
---
This dataset stores publicly available information for the CS673 course offered by Boston University. |
EleutherAI/quirky_hemisphere | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: alice_label
dtype: bool
- name: bob_label
dtype: bool
- name: difficulty
dtype: float64
- name: statement
dtype: string
- name: choices
sequence: string
- name: character
dtype: string
- name: label
dtype: bool
splits:
- name: train
num_bytes: 751938
num_examples: 7493
- name: validation
num_bytes: 401388
num_examples: 4000
- name: test
num_bytes: 401091
num_examples: 4000
download_size: 295925
dataset_size: 1554417
---
# Dataset Card for "quirky_hemisphere"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MaryamAlAli/Mixat_Standard | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: transliteration
dtype: string
- name: translation
dtype: string
splits:
- name: train
num_bytes: 7669915376.156239
num_examples: 4245
- name: test
num_bytes: 1884296464.912762
num_examples: 1062
download_size: 8037228087
dataset_size: 9554211841.069
---
# Dataset Card for "Mixat_Standard"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kaitchup/opus-Swedish-to-English | ---
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: validation
num_bytes: 281986
num_examples: 2000
- name: train
num_bytes: 94666227
num_examples: 961164
download_size: 69511177
dataset_size: 94948213
---
# Dataset Card for "opus-Swedish-to-English"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rcds/swiss_criticality_prediction | ---
annotations_creators:
- machine-generated
language:
- de
- fr
- it
language_creators:
- expert-generated
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
pretty_name: Legal Criticality Prediction
size_categories:
- 100K<n<1M
source_datasets:
- original
tags: []
task_categories:
- text-classification
---
# Dataset Card for Criticality Prediction
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Legal Criticality Prediction (LCP) is a multilingual, diachronic dataset of 139K Swiss Federal Supreme Court (FSCS) cases annotated with two criticality labels. The bge_label is a binary label (critical, non-critical), while the citation_label has 5 classes (critical-1, critical-2, critical-3, critical-4, non-critical). The critical classes of the citation_label are distinct subsets of the critical class of the bge_label. This dataset creates a challenging text classification task. We also provide additional metadata, such as the publication year, the law area, and the canton of origin of each case, to promote robustness and fairness studies in the critical area of legal NLP.
### Supported Tasks and Leaderboards
LCP can be used as a text classification task.
### Languages
Switzerland has four official languages, three of which (German, French, and Italian) are represented in this dataset. The decisions are written by the judges and clerks in the language of the proceedings.
German (91k), French (33k), Italian (15k)
## Dataset Structure
```
{
"decision_id": "008d8a52-f0ea-4820-a18c-d06066dbb407",
"language": "fr",
"year": "2018",
"chamber": "CH_BGer_004",
"region": "Federation",
"origin_chamber": "338.0",
"origin_court": "127.0",
"origin_canton": "24.0",
"law_area": "civil_law",
  "law_sub_area": null,
"bge_label": "critical",
"citation_label": "critical-1",
"facts": "Faits : A. A.a. Le 17 août 2007, C.X._, née le 14 février 1944 et domiciliée...",
"considerations": "Considérant en droit : 1. Interjeté en temps utile (art. 100 al. 1 LTF) par les défendeurs qui ont succombé dans leurs conclusions (art. 76 LTF) contre une décision...",
  "rulings": "Par ces motifs, le Tribunal fédéral prononce : 1. Le recours est rejeté. 2. Les frais judiciaires, arrêtés à 10'000 fr., sont mis solidairement à la charge des recourants..."
}
```
### Data Fields
```
decision_id: (str) a unique identifier for the document
language: (str) one of (de, fr, it)
year: (int) the publication year
chamber: (str) the chamber of the case
region: (str) the region of the case
origin_chamber: (str) the chamber of the origin case
origin_court: (str) the court of the origin case
origin_canton: (str) the canton of the origin case
law_area: (str) the law area of the case
law_sub_area: (str) the law sub area of the case
bge_label: (str) critical or non-critical
citation_label: (str) critical-1, critical-2, critical-3, critical-4, non-critical
facts: (str) the facts of the case
considerations: (str) the considerations of the case
rulings: (str) the rulings of the case
```
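As a sketch of how these fields might be consumed, the following turns records shaped like the instance above into (text, label) pairs for the binary bge_label task. The record values here are illustrative stand-ins, not real cases:

```python
# Minimal sketch: turning records with the fields above into (text, label)
# pairs for the binary bge_label task. The records are illustrative
# stand-ins shaped like the instance shown earlier, not real cases.
records = [
    {"facts": "Faits : A. ...", "bge_label": "critical", "citation_label": "critical-1"},
    {"facts": "Sachverhalt: A. ...", "bge_label": "non-critical", "citation_label": "non-critical"},
]

label2id = {"non-critical": 0, "critical": 1}
examples = [(r["facts"], label2id[r["bge_label"]]) for r in records]
print(examples[0])  # -> ('Faits : A. ...', 1)
```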
### Data Instances
[More Information Needed]
### Data Splits
The dataset was split date-stratified:
- Train: 2002-2015
- Validation: 2016-2017
- Test: 2018-2022
| Language | Subset | Number of Documents (Training/Validation/Test) |
|------------|------------|--------------------------------------------|
| German     | **de**     | 81'264 (56'592 / 19'601 / 5'071)               |
| French     | **fr**     | 49'354 (29'263 / 11'117 / 8'974)               |
| Italian    | **it**     | 7'913 (5'220 / 1'901 / 792)                    |
## Dataset Creation
### Curation Rationale
The dataset was created by Stern (2023).
### Source Data
#### Initial Data Collection and Normalization
The original data are published by the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed HTML. The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch).
#### Who are the source language producers?
The decisions are written by the judges and clerks in the language of the proceedings.
### Annotations
#### Annotation process
bge_label:
1. All bger_references in the bge header were extracted (for bge, see rcds/swiss_rulings).
2. bger file_names were compared with the extracted references.
citation_label:
1. All citations of bger cases were counted and weighted.
2. Cited cases were divided into four classes, depending on the number of citations.
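The two annotation steps for the citation_label can be sketched as follows. Note that the weighting scheme and the class thresholds used here are illustrative assumptions, since the card does not specify the actual cut-offs:

```python
from collections import Counter

# Step 1: count citations per cited bger case. The card does not specify
# the weighting scheme, so a plain count stands in for the weighted count.
citations = ["case_a", "case_b", "case_a", "case_c", "case_a", "case_b"]
counts = Counter(citations)

# Step 2: divide cited cases into four critical classes by citation count.
# These thresholds are illustrative assumptions, not the dataset's cut-offs.
def citation_label(n_citations: int) -> str:
    if n_citations >= 4:
        return "critical-1"
    if n_citations == 3:
        return "critical-2"
    if n_citations == 2:
        return "critical-3"
    if n_citations == 1:
        return "critical-4"
    return "non-critical"

labels = {case: citation_label(n) for case, n in counts.items()}
print(labels)  # -> {'case_a': 'critical-2', 'case_b': 'critical-3', 'case_c': 'critical-4'}
```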
#### Who are the annotators?
Stern processed the data and introduced the bge_label and citation_label.
Metadata is published by the Swiss Federal Supreme Court (https://www.bger.ch).
### Personal and Sensitive Information
The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
We release the data under CC-BY-4.0, which complies with the court's licensing (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf)
© Swiss Federal Supreme Court, 2002-2022
The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf
### Citation Information
Please cite our [arXiv preprint](https://arxiv.org/abs/2306.09237):
```
@misc{rasiah2023scale,
title={SCALE: Scaling up the Complexity for Advanced Language Model Evaluation},
author={Vishvaksenan Rasiah and Ronja Stern and Veton Matoshi and Matthias Stürmer and Ilias Chalkidis and Daniel E. Ho and Joel Niklaus},
year={2023},
eprint={2306.09237},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@Stern5497](https://github.com/stern5497) for adding this dataset. |
debajyotidatta/biosses | ---
license: gpl-3.0
---
|
vishalkumar012/chatai-llsama2-1k | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1654448
num_examples: 1000
download_size: 966692
dataset_size: 1654448
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jbrophy123/medical_dataset | ---
dataset_info:
features:
- name: chat_sample
dtype: string
- name: dataset_origin
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 6253382
num_examples: 5000
download_size: 0
dataset_size: 6253382
---
# Dataset Card for "medical_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Bemeo/wizmap | ---
license: mit
---
|
VXX/sd_images | ---
license: openrail
---
|
joey234/mmlu-high_school_geography-original-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 17823
num_examples: 47
download_size: 14591
dataset_size: 17823
---
# Dataset Card for "mmlu-high_school_geography-original-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
alperaktasm/konya-yangin-risk-oranlari | ---
license: cc
---
|
gayanin/clinical-all | ---
dataset_info:
- config_name: default
features:
- name: refs
dtype: string
- name: trans
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1397405
num_examples: 10807
- name: test
num_bytes: 166034
num_examples: 1352
- name: validation
num_bytes: 164660
num_examples: 1348
download_size: 865687
dataset_size: 1728099
- config_name: gcd
features:
- name: key
dtype: 'null'
- name: refs
dtype: 'null'
- name: trans
dtype: 'null'
- name: 'Unnamed: 0'
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 0
num_examples: 0
- name: test
num_bytes: 0
num_examples: 0
- name: validation
num_bytes: 0
num_examples: 0
download_size: 4128
dataset_size: 0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
- config_name: gcd
data_files:
- split: train
path: gcd/train-*
- split: test
path: gcd/test-*
- split: validation
path: gcd/validation-*
---
|
lhoestq/test-image-list | ---
dataset_info:
features:
- name: image
list: image
splits:
- name: train
num_bytes: 346275.0
num_examples: 1
download_size: 174383
dataset_size: 346275.0
---
# Dataset Card for "test-image-list"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
RUCAIBox/agieval-single-choice | ---
license: mit
configs:
- config_name: aqua-rat
data_files:
- split: dev
path: dev/aqua-rat.jsonl
- split: test
path: test/aqua-rat.jsonl
- config_name: gaokao-biology
data_files:
- split: dev
path: dev/gaokao-biology.jsonl
- split: test
path: test/gaokao-biology.jsonl
- config_name: gaokao-chemistry
data_files:
- split: dev
path: dev/gaokao-chemistry.jsonl
- split: test
path: test/gaokao-chemistry.jsonl
- config_name: gaokao-chinese
data_files:
- split: dev
path: dev/gaokao-chinese.jsonl
- split: test
path: test/gaokao-chinese.jsonl
- config_name: gaokao-english
data_files:
- split: dev
path: dev/gaokao-english.jsonl
- split: test
path: test/gaokao-english.jsonl
- config_name: gaokao-geography
data_files:
- split: dev
path: dev/gaokao-geography.jsonl
- split: test
path: test/gaokao-geography.jsonl
- config_name: gaokao-history
data_files:
- split: dev
path: dev/gaokao-history.jsonl
- split: test
path: test/gaokao-history.jsonl
- config_name: gaokao-mathqa
data_files:
- split: dev
path: dev/gaokao-mathqa.jsonl
- split: test
path: test/gaokao-mathqa.jsonl
- config_name: logiqa-en
data_files:
- split: dev
path: dev/logiqa-en.jsonl
- split: test
path: test/logiqa-en.jsonl
- config_name: logiqa-zh
data_files:
- split: dev
path: dev/logiqa-zh.jsonl
- split: test
path: test/logiqa-zh.jsonl
- config_name: lsat-ar
data_files:
- split: dev
path: dev/lsat-ar.jsonl
- split: test
path: test/lsat-ar.jsonl
- config_name: lsat-lr
data_files:
- split: dev
path: dev/lsat-lr.jsonl
- split: test
path: test/lsat-lr.jsonl
- config_name: lsat-rc
data_files:
- split: dev
path: dev/lsat-rc.jsonl
- split: test
path: test/lsat-rc.jsonl
- config_name: sat-en
data_files:
- split: dev
path: dev/sat-en.jsonl
- split: test
path: test/sat-en.jsonl
- config_name: sat-math
data_files:
- split: dev
path: dev/sat-math.jsonl
- split: test
path: test/sat-math.jsonl
- config_name: sat-en-without-passage
data_files:
- split: dev
path: dev/sat-en-without-passage.jsonl
- split: test
path: test/sat-en-without-passage.jsonl
---
|
MohammedNasri/test_prepared | ---
dataset_info:
features:
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: test
num_bytes: 10027780960
num_examples: 10440
download_size: 1513701627
dataset_size: 10027780960
---
# Dataset Card for "test_prepared"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AdapterOcean/med_alpaca_standardized_cluster_17_std | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: cluster
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 16537898
num_examples: 27801
download_size: 8526227
dataset_size: 16537898
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "med_alpaca_standardized_cluster_17_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nlhappy/CMeEE | ---
dataset_info:
features:
- name: text
dtype: string
- name: ents
list:
- name: indices
sequence: int64
- name: label
dtype: string
- name: score
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 8592427
num_examples: 14897
- name: validation
num_bytes: 2851335
num_examples: 4968
download_size: 3572845
dataset_size: 11443762
---
# Dataset Card for "CMeEE"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
huggingface/autotrain-data-emotions | Invalid username or password. |
CHENHJDJSD/example_2 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 67486109.0
num_examples: 165
- name: test
num_bytes: 75848559.0
num_examples: 185
download_size: 143344539
dataset_size: 143334668.0
---
# Dataset Card for "example_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ssbuild/alpaca_coig | ---
license: apache-2.0
---
|
yyc777/door | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': door_close
'1': door_open
splits:
- name: train
num_bytes: 20825535.0
num_examples: 69
- name: validation
num_bytes: 6062561.0
num_examples: 20
download_size: 6064804
dataset_size: 26888096.0
---
# Dataset Card for "door"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/squad_qa_num_v5_full_recite_ans_sent_no_permute_rerun | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer
dtype: string
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 7835731.7738175
num_examples: 4778
- name: validation
num_bytes: 403389
num_examples: 300
download_size: 1417647
dataset_size: 8239120.7738175
---
# Dataset Card for "squad_qa_num_v5_full_recite_ans_sent_no_permute_rerun"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Cris1907/tesis_vf | ---
license: apache-2.0
---
|
CyberHarem/k_pdw_girlsfrontline | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of k_pdw/K-PDW/KAC-PDW (Girls' Frontline)
This is the dataset of k_pdw/K-PDW/KAC-PDW (Girls' Frontline), containing 18 images and their tags.
The core tags of this character are `purple_eyes, purple_hair, breasts, black_hair, multicolored_hair, small_breasts, hair_bun, cone_hair_bun, double_bun, long_hair, streaked_hair, bangs, hair_between_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 18 | 26.04 MiB | [Download](https://huggingface.co/datasets/CyberHarem/k_pdw_girlsfrontline/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 18 | 13.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/k_pdw_girlsfrontline/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 45 | 30.22 MiB | [Download](https://huggingface.co/datasets/CyberHarem/k_pdw_girlsfrontline/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 18 | 23.06 MiB | [Download](https://huggingface.co/datasets/CyberHarem/k_pdw_girlsfrontline/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 45 | 44.16 MiB | [Download](https://huggingface.co/datasets/CyberHarem/k_pdw_girlsfrontline/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/k_pdw_girlsfrontline',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, solo, gun, see-through, ass_visible_through_thighs, bare_shoulders, full_body, grin, holding, looking_at_viewer, official_alternate_costume, soda_bottle, teeth, thigh_strap, two-tone_hair, underboob, white_background, bandaid_on_knee, bare_legs, bright_pupils, covered_navel, gloves, leotard, one-piece_swimsuit, sandals |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | gun | see-through | ass_visible_through_thighs | bare_shoulders | full_body | grin | holding | looking_at_viewer | official_alternate_costume | soda_bottle | teeth | thigh_strap | two-tone_hair | underboob | white_background | bandaid_on_knee | bare_legs | bright_pupils | covered_navel | gloves | leotard | one-piece_swimsuit | sandals |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:------|:--------------|:-----------------------------|:-----------------|:------------|:-------|:----------|:--------------------|:-----------------------------|:--------------|:--------|:--------------|:----------------|:------------|:-------------------|:------------------|:------------|:----------------|:----------------|:---------|:----------|:---------------------|:----------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
open-llm-leaderboard/details_kfkas__Llama-2-ko-7b-Chat | ---
pretty_name: Evaluation run of kfkas/Llama-2-ko-7b-Chat
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [kfkas/Llama-2-ko-7b-Chat](https://huggingface.co/kfkas/Llama-2-ko-7b-Chat) on\
\ the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 4 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_kfkas__Llama-2-ko-7b-Chat\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-09-18T06:20:53.119467](https://huggingface.co/datasets/open-llm-leaderboard/details_kfkas__Llama-2-ko-7b-Chat/blob/main/results_2023-09-18T06-20-53.119467.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.029886744966442953,\n\
\ \"em_stderr\": 0.0017437739254467523,\n \"f1\": 0.11206061241610675,\n\
\ \"f1_stderr\": 0.002589360675643281,\n \"acc\": 0.3406984196130502,\n\
\ \"acc_stderr\": 0.008168649232732146\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.029886744966442953,\n \"em_stderr\": 0.0017437739254467523,\n\
\ \"f1\": 0.11206061241610675,\n \"f1_stderr\": 0.002589360675643281\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.01288855193328279,\n \
\ \"acc_stderr\": 0.003106901266499642\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6685082872928176,\n \"acc_stderr\": 0.01323039719896465\n\
\ }\n}\n```"
repo_url: https://huggingface.co/kfkas/Llama-2-ko-7b-Chat
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|arc:challenge|25_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|arc:challenge|25_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_09_17T05_11_56.274160
path:
- '**/details_harness|drop|3_2023-09-17T05-11-56.274160.parquet'
- split: 2023_09_18T06_20_53.119467
path:
- '**/details_harness|drop|3_2023-09-18T06-20-53.119467.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-09-18T06-20-53.119467.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_09_17T05_11_56.274160
path:
- '**/details_harness|gsm8k|5_2023-09-17T05-11-56.274160.parquet'
- split: 2023_09_18T06_20_53.119467
path:
- '**/details_harness|gsm8k|5_2023-09-18T06-20-53.119467.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-09-18T06-20-53.119467.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hellaswag|10_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hellaswag|10_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-27T10:54:54.901743.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-27T16:15:02.960730.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-27T10:54:54.901743.parquet'
- split: 2023_07_27T16_15_02.960730
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-27T16:15:02.960730.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-27T16:15:02.960730.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_09_17T05_11_56.274160
path:
- '**/details_harness|winogrande|5_2023-09-17T05-11-56.274160.parquet'
- split: 2023_09_18T06_20_53.119467
path:
- '**/details_harness|winogrande|5_2023-09-18T06-20-53.119467.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-09-18T06-20-53.119467.parquet'
- config_name: results
data_files:
- split: 2023_07_27T10_54_54.901743
path:
- results_2023-07-27T10:54:54.901743.parquet
- split: 2023_07_27T16_15_02.960730
path:
- results_2023-07-27T16:15:02.960730.parquet
- split: 2023_09_17T05_11_56.274160
path:
- results_2023-09-17T05-11-56.274160.parquet
- split: 2023_09_18T06_20_53.119467
path:
- results_2023-09-18T06-20-53.119467.parquet
- split: latest
path:
- results_2023-09-18T06-20-53.119467.parquet
---
# Dataset Card for Evaluation run of kfkas/Llama-2-ko-7b-Chat
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/kfkas/Llama-2-ko-7b-Chat
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [kfkas/Llama-2-ko-7b-Chat](https://huggingface.co/kfkas/Llama-2-ko-7b-Chat) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 4 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the runs (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_kfkas__Llama-2-ko-7b-Chat",
"harness_winogrande_5",
split="train")
```
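The timestamped split names (e.g. `2023_09_18T06_20_53.119467`) are ISO timestamps with `-` and `:` replaced by `_` so they are valid split identifiers. A small helper (an illustrative sketch, not part of the leaderboard tooling) can turn a split name back into a `datetime`, which is handy for sorting runs chronologically:

```python
from datetime import datetime

def split_to_datetime(split_name: str) -> datetime:
    """Convert a run split name such as '2023_09_18T06_20_53.119467'
    back into a datetime (underscores stand in for '-' and ':')."""
    date_part, time_part = split_name.split("T")
    iso = date_part.replace("_", "-") + "T" + time_part.replace("_", ":")
    return datetime.fromisoformat(iso)

print(split_to_datetime("2023_09_18T06_20_53.119467"))
# → 2023-09-18 06:20:53.119467
```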
## Latest results
These are the [latest results from run 2023-09-18T06:20:53.119467](https://huggingface.co/datasets/open-llm-leaderboard/details_kfkas__Llama-2-ko-7b-Chat/blob/main/results_2023-09-18T06-20-53.119467.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"em": 0.029886744966442953,
"em_stderr": 0.0017437739254467523,
"f1": 0.11206061241610675,
"f1_stderr": 0.002589360675643281,
"acc": 0.3406984196130502,
"acc_stderr": 0.008168649232732146
},
"harness|drop|3": {
"em": 0.029886744966442953,
"em_stderr": 0.0017437739254467523,
"f1": 0.11206061241610675,
"f1_stderr": 0.002589360675643281
},
"harness|gsm8k|5": {
"acc": 0.01288855193328279,
"acc_stderr": 0.003106901266499642
},
"harness|winogrande|5": {
"acc": 0.6685082872928176,
"acc_stderr": 0.01323039719896465
}
}
```
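To compare tasks programmatically, the nested results dictionary above can be flattened into `(task, metric, value)` rows. A minimal sketch using the accuracy values shown:

```python
# Aggregated accuracies copied from the latest results above.
results = {
    "all": {"acc": 0.3406984196130502},
    "harness|gsm8k|5": {"acc": 0.01288855193328279},
    "harness|winogrande|5": {"acc": 0.6685082872928176},
}

# Flatten into rows and sort by score, best task first.
rows = sorted(
    ((task, metric, value)
     for task, metrics in results.items()
     for metric, value in metrics.items()),
    key=lambda row: row[2],
    reverse=True,
)
for task, metric, value in rows:
    print(f"{task:25s} {metric} = {value:.4f}")
```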
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]