datasetId stringlengths 2 117 | card stringlengths 19 1.01M |
|---|---|
cristianML/bb | ---
license: bigscience-bloom-rail-1.0
---
|
CyberHarem/kizuna_anniversary_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of kizuna_anniversary/キズナアイ・アニバーサリー/绊爱·Anniversary (Azur Lane)
This is the dataset of kizuna_anniversary/キズナアイ・アニバーサリー/绊爱·Anniversary (Azur Lane), containing 51 images and their tags.
The core tags of this character are `brown_hair, hairband, multicolored_hair, long_hair, streaked_hair, breasts, pink_hair, medium_breasts, pink_hairband, bangs, blue_eyes, hair_ornament, ribbon`, which are pruned in this dataset.
Images were crawled from several sites (e.g., Danbooru, Pixiv, Zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 51 | 66.17 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kizuna_anniversary_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 51 | 39.68 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kizuna_anniversary_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 116 | 78.84 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kizuna_anniversary_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 51 | 59.80 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kizuna_anniversary_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 116 | 110.57 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kizuna_anniversary_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/kizuna_anniversary_azurlane',
    repo_type='dataset',
    filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
Tag clustering results; some character outfits may be discoverable here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 51 |  |  |  |  |  | virtual_youtuber, 1girl, looking_at_viewer, solo, blush, cleavage, open_mouth, bare_shoulders, dress, :d, white_shorts, strapless, white_background, collarbone, wrist_cuffs, frilled_choker, simple_background, thighlet |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | virtual_youtuber | 1girl | looking_at_viewer | solo | blush | cleavage | open_mouth | bare_shoulders | dress | :d | white_shorts | strapless | white_background | collarbone | wrist_cuffs | frilled_choker | simple_background | thighlet |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------|:--------|:--------------------|:-------|:--------|:-----------|:-------------|:-----------------|:--------|:-----|:---------------|:------------|:-------------------|:-------------|:--------------|:-----------------|:--------------------|:-----------|
| 0 | 51 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
heegyu/hh-rlhf-ko | ---
license: mit
language:
- ko
---
- Original Dataset: [Anthropic/hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf)
- Translated using [maywell/Synatra-7B-v0.3-Translation](https://huggingface.co/maywell/Synatra-7B-v0.3-Translation)
- Translation in progress... |
iamroot/stsb-contrastive-axes | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text_a_embedding
sequence: float32
- name: text_b_embedding
sequence: float32
- name: prompt_embedding
sequence: float32
- name: text_a
dtype: string
- name: text_b
dtype: string
- name: prompt
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 219575612.0
num_examples: 23388
- name: test
num_bytes: 54893903.0
num_examples: 5847
download_size: 311913820
dataset_size: 274469515.0
---
# Glue-STSB with Contrastive Axes
Dataset format:
Each example is a pair of sentences plus a prompt (an axis) along which the sentences are similar or different.
Embeddings generated by `sentence-transformers` are included.
`text_a` and `text_b` come from the GLUE-STSB dataset; `prompt` and `label` are machine-generated.
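Since the embedding columns are plain float sequences, a pair can be scored against its prompt axis directly with cosine similarity. The sketch below is a minimal, hypothetical example (the field names match the schema above, but the vectors are toy values, not real `sentence-transformers` output):

```python
import math

def cosine(a, b):
    # cosine similarity between two equal-length float vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def axis_score(example):
    # compare how each sentence relates to the prompt axis;
    # a small difference suggests the pair is similar along that axis
    sim_a = cosine(example['text_a_embedding'], example['prompt_embedding'])
    sim_b = cosine(example['text_b_embedding'], example['prompt_embedding'])
    return abs(sim_a - sim_b)

toy = {
    'text_a_embedding': [1.0, 0.0],
    'text_b_embedding': [0.0, 1.0],
    'prompt_embedding': [1.0, 1.0],
}
print(axis_score(toy))  # both toy sentences sit symmetrically around this axis
```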
|
Foxasdf/common_voice_v15_ar_whisper-base_ar | ---
dataset_info:
features:
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 17734638312
num_examples: 18464
- name: test
num_bytes: 6538942824
num_examples: 6808
download_size: 3395753968
dataset_size: 24273581136
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Matsakitkat/test1 | ---
license: cc
---
|
CyberHarem/quency_nikke | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of quency/クエンシー/坎西/퀀시 (Nikke: Goddess of Victory)
This is the dataset of quency/クエンシー/坎西/퀀시 (Nikke: Goddess of Victory), containing 31 images and their tags.
The core tags of this character are `breasts, long_hair, multicolored_hair, black_hair, ponytail, streaked_hair, large_breasts, red_hair, red_eyes, bangs, very_long_hair, two-tone_hair, pink_hair, huge_breasts`, which are pruned in this dataset.
Images were crawled from several sites (e.g., Danbooru, Pixiv, Zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:--------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 31 | 47.24 MiB | [Download](https://huggingface.co/datasets/CyberHarem/quency_nikke/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 31 | 24.61 MiB | [Download](https://huggingface.co/datasets/CyberHarem/quency_nikke/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 77 | 52.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/quency_nikke/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 31 | 41.31 MiB | [Download](https://huggingface.co/datasets/CyberHarem/quency_nikke/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 77 | 77.51 MiB | [Download](https://huggingface.co/datasets/CyberHarem/quency_nikke/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/quency_nikke',
    repo_type='dataset',
    filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
Tag clustering results; some character outfits may be discoverable here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------|
| 0 | 24 |  |  |  |  |  | 1girl, looking_at_viewer, solo, blush, cleavage, navel, thighs, prison_clothes, cuffs, grin, striped_shirt |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | solo | blush | cleavage | navel | thighs | prison_clothes | cuffs | grin | striped_shirt |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:-------|:--------|:-----------|:--------|:---------|:-----------------|:--------|:-------|:----------------|
| 0 | 24 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X |
|
yjernite/prof_report__prompthero-openjourney-v4__multi__12 | ---
dataset_info:
features:
- name: cluster_id
dtype: int64
- name: cluster_size
dtype: int64
- name: img_ids
sequence: int64
- name: img_cluster_scores
sequence: float64
splits:
- name: accountant
num_bytes: 1672
num_examples: 3
- name: aerospace_engineer
num_bytes: 1744
num_examples: 6
- name: aide
num_bytes: 1744
num_examples: 6
- name: air_conditioning_installer
num_bytes: 1696
num_examples: 4
- name: architect
num_bytes: 1696
num_examples: 4
- name: artist
num_bytes: 1744
num_examples: 6
- name: author
num_bytes: 1720
num_examples: 5
- name: baker
num_bytes: 1744
num_examples: 6
- name: bartender
num_bytes: 1672
num_examples: 3
- name: bus_driver
num_bytes: 1744
num_examples: 6
- name: butcher
num_bytes: 1696
num_examples: 4
- name: career_counselor
num_bytes: 1720
num_examples: 5
- name: carpenter
num_bytes: 1672
num_examples: 3
- name: carpet_installer
num_bytes: 1672
num_examples: 3
- name: cashier
num_bytes: 1672
num_examples: 3
- name: ceo
num_bytes: 1672
num_examples: 3
- name: childcare_worker
num_bytes: 1744
num_examples: 6
- name: civil_engineer
num_bytes: 1648
num_examples: 2
- name: claims_appraiser
num_bytes: 1720
num_examples: 5
- name: cleaner
num_bytes: 1792
num_examples: 8
- name: clergy
num_bytes: 1696
num_examples: 4
- name: clerk
num_bytes: 1768
num_examples: 7
- name: coach
num_bytes: 1672
num_examples: 3
- name: community_manager
num_bytes: 1720
num_examples: 5
- name: compliance_officer
num_bytes: 1720
num_examples: 5
- name: computer_programmer
num_bytes: 1624
num_examples: 1
- name: computer_support_specialist
num_bytes: 1768
num_examples: 7
- name: computer_systems_analyst
num_bytes: 1696
num_examples: 4
- name: construction_worker
num_bytes: 1696
num_examples: 4
- name: cook
num_bytes: 1768
num_examples: 7
- name: correctional_officer
num_bytes: 1768
num_examples: 7
- name: courier
num_bytes: 1768
num_examples: 7
- name: credit_counselor
num_bytes: 1720
num_examples: 5
- name: customer_service_representative
num_bytes: 1720
num_examples: 5
- name: data_entry_keyer
num_bytes: 1672
num_examples: 3
- name: dental_assistant
num_bytes: 1648
num_examples: 2
- name: dental_hygienist
num_bytes: 1648
num_examples: 2
- name: dentist
num_bytes: 1696
num_examples: 4
- name: designer
num_bytes: 1696
num_examples: 4
- name: detective
num_bytes: 1672
num_examples: 3
- name: director
num_bytes: 1696
num_examples: 4
- name: dishwasher
num_bytes: 1696
num_examples: 4
- name: dispatcher
num_bytes: 1672
num_examples: 3
- name: doctor
num_bytes: 1696
num_examples: 4
- name: drywall_installer
num_bytes: 1672
num_examples: 3
- name: electrical_engineer
num_bytes: 1672
num_examples: 3
- name: electrician
num_bytes: 1648
num_examples: 2
- name: engineer
num_bytes: 1648
num_examples: 2
- name: event_planner
num_bytes: 1672
num_examples: 3
- name: executive_assistant
num_bytes: 1624
num_examples: 1
download_size: 215957
dataset_size: 85016
---
# Dataset Card for "prof_report__prompthero-openjourney-v4__multi__12"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
NTA-Dev/Blackfeet_1 | ---
license: apache-2.0
---
|
Tippawan/test2-data-semi-trainulb-r3 | ---
dataset_info:
features:
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: prob
sequence: float64
- name: ifpass
sequence: int64
- name: pred
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 202202230
num_examples: 41738
download_size: 31409433
dataset_size: 202202230
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Falah/fox_1_prompts | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 2953
num_examples: 13
download_size: 3796
dataset_size: 2953
---
# Dataset Card for "fox_1_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yangyz1230/enhancers_not_filtered | ---
dataset_info:
features:
- name: name
dtype: string
- name: sequence
dtype: string
- name: chrom
dtype: string
- name: start
dtype: int64
- name: end
dtype: int64
- name: strand
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 984517
num_examples: 3948
- name: test
num_bytes: 99273
num_examples: 400
download_size: 509164
dataset_size: 1083790
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Haks12/neuro_patents_sample_finetune_inclass | ---
dataset_info:
features:
- name: appln_id
dtype: int64
- name: appln_filing_date
dtype: string
- name: docdb_family_id
dtype: int64
- name: granted
dtype: string
- name: appln_abstract
dtype: string
- name: appln_abstract_lg
dtype: string
- name: appln_title
dtype: string
- name: applt_coun
dtype: string
- name: invt_coun
dtype: string
- name: cpc
dtype: string
- name: ipc
sequence: string
- name: __index_level_0__
dtype: int64
- name: input
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 8444.0
num_examples: 4
download_size: 24402
dataset_size: 8444.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
joemerson/santana | ---
license: openrail
---
|
royY/customroycode2 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 5514
num_examples: 39
download_size: 2495
dataset_size: 5514
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
heliosprime/twitter_dataset_1713045963 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 13059
num_examples: 28
download_size: 9426
dataset_size: 13059
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1713045963"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/dataset_20231006_233908 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 762696.0
num_examples: 297
- name: test
num_bytes: 7704.0
num_examples: 3
download_size: 73813
dataset_size: 770400.0
---
# Dataset Card for "dataset_20231006_233908"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
marup/LinechuRVC | ---
license: openrail
---
|
distilled-from-one-sec-cv12/chunk_34 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 651030124
num_examples: 126857
download_size: 666104192
dataset_size: 651030124
---
# Dataset Card for "chunk_34"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zolak/twitter_dataset_79_1713052121 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 3310673
num_examples: 8161
download_size: 1650756
dataset_size: 3310673
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
irodkin/celeba_with_llava_captions | ---
language: en
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
- name: conditioning_image
dtype: image
splits:
- name: train
num_bytes: 576196360.392
num_examples: 36646
download_size: 257039500
dataset_size: 576196360.392
---
# Dataset Card for "celeba_with_llava_captions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ibragim-bad/hs_dev_test | ---
dataset_info:
features:
- name: query
dtype: string
- name: choices
sequence: string
- name: gold_generation
dtype: string
splits:
- name: test
num_bytes: 7885767
num_examples: 10003
- name: dev
num_bytes: 9610103
num_examples: 10042
download_size: 10451785
dataset_size: 17495870
---
# Dataset Card for "hs_dev_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
akshatshah1103/retail-faq | ---
license: apache-2.0
---
|
P3ps/Cross_ner | ---
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-academicjournal
'2': I-academicjournal
'3': B-album
'4': I-album
'5': B-algorithm
'6': I-algorithm
'7': B-astronomicalobject
'8': I-astronomicalobject
'9': B-award
'10': I-award
'11': B-band
'12': I-band
'13': B-book
'14': I-book
'15': B-chemicalcompound
'16': I-chemicalcompound
'17': B-chemicalelement
'18': I-chemicalelement
'19': B-conference
'20': I-conference
'21': B-country
'22': I-country
'23': B-discipline
'24': I-discipline
'25': B-election
'26': I-election
'27': B-enzyme
'28': I-enzyme
'29': B-event
'30': I-event
'31': B-field
'32': I-field
'33': B-literarygenre
'34': I-literarygenre
'35': B-location
'36': I-location
'37': B-magazine
'38': I-magazine
'39': B-metrics
'40': I-metrics
'41': B-misc
'42': I-misc
'43': B-musicalartist
'44': I-musicalartist
'45': B-musicalinstrument
'46': I-musicalinstrument
'47': B-musicgenre
'48': I-musicgenre
'49': B-organisation
'50': I-organisation
'51': B-person
'52': I-person
'53': B-poem
'54': I-poem
'55': B-politicalparty
'56': I-politicalparty
'57': B-politician
'58': I-politician
'59': B-product
'60': I-product
'61': B-programlang
'62': I-programlang
'63': B-protein
'64': I-protein
'65': B-researcher
'66': I-researcher
'67': B-scientist
'68': I-scientist
'69': B-song
'70': I-song
'71': B-task
'72': I-task
'73': B-theory
'74': I-theory
'75': B-university
'76': I-university
'77': B-writer
'78': I-writer
splits:
- name: train
num_bytes: 6995502.064669556
num_examples: 20856
- name: test
num_bytes: 1749210.9353304438
num_examples: 5215
download_size: 2609946
dataset_size: 8744713.0
---
# Dataset Card for "Cross_ner"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rogerdehe/xfund | ---
annotations_creators:
- found
language_creators:
- found
task_categories:
- text-classification
tags:
- layoutlmv3
- xfund
- funsd
language:
- de
- es
- fr
- it
- ja
license:
- other
multilinguality:
- multilingual
---
XFUND dataset
See more details in [the XFUND repository](https://github.com/doc-analysis/XFUND).
### Citation Information
```bibtex
@inproceedings{xu-etal-2022-xfund,
title = "{XFUND}: A Benchmark Dataset for Multilingual Visually Rich Form Understanding",
author = "Xu, Yiheng and
Lv, Tengchao and
Cui, Lei and
Wang, Guoxin and
Lu, Yijuan and
Florencio, Dinei and
Zhang, Cha and
Wei, Furu",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-acl.253",
doi = "10.18653/v1/2022.findings-acl.253",
pages = "3214--3224",
abstract = "Multimodal pre-training with text, layout, and image has achieved SOTA performance for visually rich document understanding tasks recently, which demonstrates the great potential for joint learning across different modalities. However, the existed research work has focused only on the English domain while neglecting the importance of multilingual generalization. In this paper, we introduce a human-annotated multilingual form understanding benchmark dataset named XFUND, which includes form understanding samples in 7 languages (Chinese, Japanese, Spanish, French, Italian, German, Portuguese). Meanwhile, we present LayoutXLM, a multimodal pre-trained model for multilingual document understanding, which aims to bridge the language barriers for visually rich document understanding. Experimental results show that the LayoutXLM model has significantly outperformed the existing SOTA cross-lingual pre-trained models on the XFUND dataset. The XFUND dataset and the pre-trained LayoutXLM model have been publicly available at https://aka.ms/layoutxlm.",
}
``` |
projectlosangeles/Los-Angeles-MIDI-Dataset | ---
license: cc-by-nc-sa-4.0
tags:
- mir
- music
- midi
- midi-dataset
---
# Los Angeles MIDI Dataset
## SOTA kilo-scale MIDI dataset for MIR and Music AI purposes
***

***
## Search and Explore Los Angeles MIDI dataset
[![Open In Colab][colab-badge]][colab-notebook1]
[colab-notebook1]: <https://colab.research.google.com/github/asigalov61/Los-Angeles-MIDI-Dataset/blob/main/Los_Angeles_MIDI_Dataset_Search_and_Explore.ipynb>
[colab-badge]: <https://colab.research.google.com/assets/colab-badge.svg>
***
## [NEW] Master MIDI Dataset GPU Search and Filter
[![Open In Colab][colab-badge]][colab-notebook5]
[colab-notebook5]: <https://colab.research.google.com/github/asigalov61/Los-Angeles-MIDI-Dataset/blob/main/Extras/Master_MIDI_Dataset_GPU_Search_and_Filter.ipynb>
[colab-badge]: <https://colab.research.google.com/assets/colab-badge.svg>
***
## Master MIDI Dataset Search and Filter
[![Open In Colab][colab-badge]][colab-notebook4]
[colab-notebook4]: <https://colab.research.google.com/github/asigalov61/Los-Angeles-MIDI-Dataset/blob/main/Extras/Master_MIDI_Dataset_Search_and_Filter.ipynb>
[colab-badge]: <https://colab.research.google.com/assets/colab-badge.svg>
***
## Make your own Los Angeles MIDI Dataset from any MIDI scrape
[![Open In Colab][colab-badge]][colab-notebook2]
[colab-notebook2]: <https://colab.research.google.com/github/asigalov61/Los-Angeles-MIDI-Dataset/blob/main/Los_Angeles_MIDI_Dataset_Maker.ipynb>
[colab-badge]: <https://colab.research.google.com/assets/colab-badge.svg>
***
## Make your own Los Angeles MIDI Dataset Metadata
[![Open In Colab][colab-badge]][colab-notebook3]
[colab-notebook3]: <https://colab.research.google.com/github/asigalov61/Los-Angeles-MIDI-Dataset/blob/main/META-DATA/Los_Angeles_MIDI_Dataset_Metadata_Maker.ipynb>
[colab-badge]: <https://colab.research.google.com/assets/colab-badge.svg>
***
## [Los Angeles MIDI Dataset is now available for download!](https://huggingface.co/datasets/projectlosangeles/Los-Angeles-MIDI-Dataset)
***
## Main Features:
### 1) ~405000 100% unique MIDIs to explore :)
### 2) Each MIDI file was read-checked and 100% de-duped
### 3) Extensive meta-data for each MIDI file
### 4) Full chords data for each MIDI file
### 5) Helper Python code
***
## NEW in version 4.0
### 1) Added 160519 new unique MIDIs
### 2) Dataset now contains 404714 MIDIs
### 3) Removed all malformed MIDIs
### 4) Expanded dataset MIDIs metadata
### 5) Added MIDIs chords database
### 6) Updated dataset concept artwork
### Enjoy! :)
***
```bibtex
@inproceedings{lev2024losangelesmididataset,
title = {Los Angeles MIDI Dataset: SOTA kilo-scale MIDI dataset for MIR and Music AI purposes},
author = {Aleksandr Lev},
booktitle = {GitHub},
year = {2024},
}
```
***
### Project Los Angeles
### Tegridy Code 2024
|
FreedomIntelligence/MMLU_Deutsch | ---
license: mit
---
German version of the MMLU dataset, translated by gpt-3.5-turbo.
The dataset is used in research related to [MultilingualSIFT](https://github.com/FreedomIntelligence/MultilingualSIFT). |
cbokpark/kmmlu90_test | ---
configs:
- config_name: zerocot
data_files:
- split: test
path: data/kmmlu-90-zerocot.csv
---
|
LPFLEO/newdata1116 | ---
license: mit
---
|
achang/render_text | ---
dataset_info:
features:
- name: review_body
dtype: string
- name: image
dtype: image
- name: section
dtype: image
- name: section_text
dtype: string
- name: pos_section_txt
dtype: string
splits:
- name: train
num_bytes: 6082763237.75
num_examples: 63057
- name: validation
num_bytes: 150630977.75
num_examples: 1561
- name: test
num_bytes: 152523383.75
num_examples: 1581
download_size: 6373937421
dataset_size: 6385917599.25
---
# Dataset Card for "render_text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
WilliamWen/summarization_cata | ---
license: apache-2.0
---
|
fathyshalab/massive_news-de | ---
dataset_info:
features:
- name: id
dtype: string
- name: label
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 30815
num_examples: 503
- name: validation
num_bytes: 5434
num_examples: 82
- name: test
num_bytes: 7882
num_examples: 124
download_size: 25144
dataset_size: 44131
---
# Dataset Card for "massive_news-de"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ThursdayU/pom | ---
license: bigscience-bloom-rail-1.0
task_categories:
- object-detection
language:
- en
tags:
- biology
pretty_name: pomo
size_categories:
- 1K<n<10K
--- |
Alienmaster/PotTS | ---
language:
- de
license: mit
pretty_name: "PotTS: The Potsdam Twitter Sentiment Corpus"
tags:
- Sentiment Analysis
task_categories:
- text-classification
size_categories:
- 1K<n<10K
configs:
- config_name: default
column_names: ["id", "label", "text", "normalized", "pos", "dependency", "meta"]
data_files:
- split: dev
path: "dev/*.tsv"
- split: test
path: "test/*.tsv"
- split: train
path: "train/*.tsv"
---
# PotTS: The Potsdam Twitter Sentiment Corpus
This dataset contains the Potsdam Twitter Sentiment Corpus based on [this data](https://github.com/WladimirSidorenko/CGSA/tree/master/data/PotTS/not-preprocessed).
The link to the original annotated dataset can be found under Links.
The only difference is that examples with mixed sentiment have been removed (32 dev / 55 test / 401 train).
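Since the splits are shipped as headerless TSV files with the column names declared in the YAML config above, a row can also be parsed without the `datasets` library. A minimal sketch, assuming that column order (the sample row is invented for illustration):

```python
import csv
import io

# column order as declared in the dataset config above
COLUMNS = ["id", "label", "text", "normalized", "pos", "dependency", "meta"]

def parse_rows(tsv_text):
    # map each tab-separated row onto the declared column names
    reader = csv.reader(io.StringIO(tsv_text), delimiter="\t")
    return [dict(zip(COLUMNS, row)) for row in reader]

# hypothetical row, purely for illustration of the column layout
sample = "42\tpositive\tTolles Wetter heute!\ttolles wetter heute !\tADJ NOUN ADV PUNCT\t2 0 2 2\t{}"
rows = parse_rows(sample)
print(rows[0]["label"], rows[0]["text"])
```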
### Links
https://github.com/WladimirSidorenko/PotTS
### Citation Information
```
@inproceedings{sidarenka-2016-potts,
title = "{P}ot{TS}: The {P}otsdam {T}witter Sentiment Corpus",
author = "Sidarenka, Uladzimir",
editor = "Calzolari, Nicoletta and
Choukri, Khalid and
Declerck, Thierry and
Goggi, Sara and
Grobelnik, Marko and
Maegaard, Bente and
Mariani, Joseph and
Mazo, Helene and
Moreno, Asuncion and
Odijk, Jan and
Piperidis, Stelios",
booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
month = may,
year = "2016",
address = "Portoro{\v{z}}, Slovenia",
publisher = "European Language Resources Association (ELRA)",
  url = "https://aclanthology.org/L16-1181"
}
```
|
tsabar/rvl_cdip_10_examples_per_class_donut | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': letter
'1': form
'2': email
'3': handwritten
'4': advertisement
'5': scientific report
'6': scientific publication
'7': specification
'8': file folder
'9': news article
'10': budget
'11': invoice
'12': presentation
'13': questionnaire
'14': resume
'15': memo
- name: ground_truth
dtype: string
splits:
- name: test
num_bytes: 18011328.0
num_examples: 160
- name: train
num_bytes: 19396350.0
num_examples: 160
download_size: 35234585
dataset_size: 37407678.0
---
# Dataset Card for "rvl_cdip_10_examples_per_class_donut"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Isaak-Carter/JOSIE_v928.13.63_llama | ---
dataset_info:
features:
- name: sample
dtype: string
splits:
- name: train
num_bytes: 317419
num_examples: 310
download_size: 75633
dataset_size: 317419
---
# Dataset Card for "JOSIE_v928.13.63_llama"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/m4_sopmod_ii_girlsfrontline | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of m4_sopmod_ii/M4SOPMODII/M4SOPMODII (Girls' Frontline)
This is the dataset of m4_sopmod_ii/M4SOPMODII/M4SOPMODII (Girls' Frontline), containing 500 images and their tags.
The core tags of this character are `long_hair, multicolored_hair, streaked_hair, red_eyes, red_hair, blonde_hair, bangs, headgear, breasts, hair_between_eyes, pink_hair`, which are pruned in this dataset.
Images were crawled from several sites (e.g., Danbooru, Pixiv, Zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 682.28 MiB | [Download](https://huggingface.co/datasets/CyberHarem/m4_sopmod_ii_girlsfrontline/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 358.68 MiB | [Download](https://huggingface.co/datasets/CyberHarem/m4_sopmod_ii_girlsfrontline/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1196 | 785.57 MiB | [Download](https://huggingface.co/datasets/CyberHarem/m4_sopmod_ii_girlsfrontline/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 590.27 MiB | [Download](https://huggingface.co/datasets/CyberHarem/m4_sopmod_ii_girlsfrontline/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1196 | 1.14 GiB | [Download](https://huggingface.co/datasets/CyberHarem/m4_sopmod_ii_girlsfrontline/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/m4_sopmod_ii_girlsfrontline',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from them.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 12 |  |  |  |  |  | 1girl, black_dress, looking_at_viewer, official_alternate_costume, smile, solo, single_mechanical_arm, black_gloves, blush, hair_flower, open_mouth, medium_breasts, bare_shoulders, feet_out_of_frame, holding_cup, red_pantyhose, simple_background, white_background, wine_glass |
| 1 | 48 |  |  |  |  |  | 1girl, m4_carbine, solo, smile, holding_gun, black_jacket, looking_at_viewer, gloves, open_mouth, armband, scarf, single_mechanical_arm, simple_background |
| 2 | 11 |  |  |  |  |  | 1girl, black_jacket, looking_at_viewer, simple_background, solo, smile, upper_body, white_background, scarf, armband, blush, cleavage, open_mouth, bandana, fang, hood, medium_breasts |
| 3 | 5 |  |  |  |  |  | 1girl, black_gloves, black_jacket, looking_at_viewer, open_mouth, simple_background, smile, solo, white_background, long_sleeves, upper_body, blush, fang |
| 4 | 10 |  |  |  |  |  | 1boy, 1girl, blush, hetero, solo_focus, navel, nipples, penis, censored, completely_nude, pussy, sex, spread_legs, collarbone, medium_breasts, open_mouth, sweat, vaginal, looking_at_viewer, pov, erection, on_back, smile, stomach |
| 5 | 9 |  |  |  |  |  | day, looking_at_viewer, outdoors, 1girl, beach, blue_sky, blush, cloud, collarbone, smile, solo, bare_shoulders, black_bikini, cleavage, navel, ocean, large_breasts, barefoot, open_mouth, thighs, wet, hair_flower, sand, stomach, water |
| 6 | 7 |  |  |  |  |  | 1girl, serafuku, smile, looking_at_viewer, red_neckerchief, solo, blush, open_mouth, pleated_skirt, blue_sailor_collar, blue_skirt, shirt, simple_background, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_dress | looking_at_viewer | official_alternate_costume | smile | solo | single_mechanical_arm | black_gloves | blush | hair_flower | open_mouth | medium_breasts | bare_shoulders | feet_out_of_frame | holding_cup | red_pantyhose | simple_background | white_background | wine_glass | m4_carbine | holding_gun | black_jacket | gloves | armband | scarf | upper_body | cleavage | bandana | fang | hood | long_sleeves | 1boy | hetero | solo_focus | navel | nipples | penis | censored | completely_nude | pussy | sex | spread_legs | collarbone | sweat | vaginal | pov | erection | on_back | stomach | day | outdoors | beach | blue_sky | cloud | black_bikini | ocean | large_breasts | barefoot | thighs | wet | sand | water | serafuku | red_neckerchief | pleated_skirt | blue_sailor_collar | blue_skirt | shirt |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------|:--------------------|:-----------------------------|:--------|:-------|:------------------------|:---------------|:--------|:--------------|:-------------|:-----------------|:-----------------|:--------------------|:--------------|:----------------|:--------------------|:-------------------|:-------------|:-------------|:--------------|:---------------|:---------|:----------|:--------|:-------------|:-----------|:----------|:-------|:-------|:---------------|:-------|:---------|:-------------|:--------|:----------|:--------|:-----------|:------------------|:--------|:------|:--------------|:-------------|:--------|:----------|:------|:-----------|:----------|:----------|:------|:-----------|:--------|:-----------|:--------|:---------------|:--------|:----------------|:-----------|:---------|:------|:-------|:--------|:-----------|:------------------|:----------------|:---------------------|:-------------|:--------|
| 0 | 12 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 48 |  |  |  |  |  | X | | X | | X | X | X | | | | X | | | | | | X | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 11 |  |  |  |  |  | X | | X | | X | X | | | X | | X | X | | | | | X | X | | | | X | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | | X | | X | X | | X | X | | X | | | | | | X | X | | | | X | | | | X | | | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 10 |  |  |  |  |  | X | | X | | X | | | | X | | X | X | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | |
| 5 | 9 |  |  |  |  |  | X | | X | | X | X | | | X | X | X | | X | | | | | | | | | | | | | | X | | | | | | | | X | | | | | | | | X | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | |
| 6 | 7 |  |  |  |  |  | X | | X | | X | X | | | X | | X | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X |
|
leoleoasd/the-stack-dedup-python-filtered-dec_gen_async-gpt2 | ---
dataset_info:
features:
- name: content
dtype: string
- name: input_ids
sequence: int32
- name: ratio_char_token
dtype: float64
- name: token_count
dtype: int64
splits:
- name: train
num_bytes: 162183078606
num_examples: 12960052
download_size: 47468031271
dataset_size: 162183078606
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
knkarthick/highlightsum | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
task_ids: []
pretty_name: HighlightSum Corpus
---
# Dataset Card for HighlightSum Corpus [Single Dataset Comprising AMI, SamSUM & DialogSUM for Brief Summarization of Text]
## Dataset Description
### Links
- **AMI:** https://huggingface.co/datasets/knkarthick/AMI
- **DialogSUM:** https://github.com/cylnlp/dialogsum
- **SamSUM:** https://huggingface.co/datasets/knkarthick/samsum
- **Point of Contact:** https://huggingface.co/knkarthick
### Dataset Summary
HighlightSum is a collection of large-scale dialogue summarization datasets drawn from AMI, SamSUM & DialogSUM, consisting of 31,108 dialogues with corresponding manually labeled summaries.
### Languages
English
## Dataset Structure
### Data Instances
HighlightSum is a large-scale dialogue summarization dataset collection, consisting of 31,108 dialogues split into train, validation, and test sets.
The first instance in the training set:
{'id': 'train_0',
'summary': "Mr. Smith's getting a check-up, and Doctor Hawkins advises him to have one every year. Hawkins'll give some information about their classes and medications to help Mr. Smith quit smoking.",
'dialogue': "#Person1#: Hi, Mr. Smith. I'm Doctor Hawkins. Why are you here today?\n#Person2#: I found it would be a good idea to get a check-up.\n#Person1#: Yes, well, you haven't had one for 5 years. You should have one every year.\n#Person2#: I know. I figure as long as there is nothing wrong, why go see the doctor?\n#Person1#: Well, the best way to avoid serious illnesses is to find out about them early. So try to come at least once a year for your own good.\n#Person2#: Ok.\n#Person1#: Let me see here. Your eyes and ears look fine. Take a deep breath, please. Do you smoke, Mr. Smith?\n#Person2#: Yes.\n#Person1#: Smoking is the leading cause of lung cancer and heart disease, you know. You really should quit.\n#Person2#: I've tried hundreds of times, but I just can't seem to kick the habit.\n#Person1#: Well, we have classes and some medications that might help. I'll give you more information before you leave.\n#Person2#: Ok, thanks doctor."}
### Data Fields
- dialogue: text of dialogue.
- summary: human written summary of the dialogue.
- id: unique file id of an example.
### Data Splits
- train: 27401
- val: 1360
- test: 2347
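As a quick sanity check, the split sizes above should sum to the 31,108 dialogues stated in the summary. The sketch below assumes the dataset is fetched with the `datasets` library; `load_highlightsum` and `total_dialogues` are illustrative helper names, not part of the corpus:

```python
def load_highlightsum():
    # Requires network access and the `datasets` package.
    from datasets import load_dataset
    return load_dataset("knkarthick/highlightsum")

# Split sizes as stated on this card (train/val/test).
EXPECTED_SPLITS = {"train": 27401, "val": 1360, "test": 2347}

def total_dialogues(splits):
    # Sum the per-split counts to get the overall corpus size.
    return sum(splits.values())

assert total_dialogues(EXPECTED_SPLITS) == 31108
```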
## Dataset Creation
### Curation Rationale
Collection of AMI, SamSUM & DialogSUM Datasets.
### Who are the source language producers?
linguists
### Who are the annotators?
language experts
## Licensing Information
non-commercial licence: MIT
## Citation Information
Refer to the above links for credits & citations. |
distil-whisper/tedlium-prompted | ---
dataset_info:
config_name: release3
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: string
- name: gender
dtype:
class_label:
names:
'0': unknown
'1': female
'2': male
- name: file
dtype: string
- name: id
dtype: string
- name: whisper_transcript_unprompted
dtype: string
- name: whisper_transcript
dtype: string
splits:
- name: train
num_bytes: 52484152554.125
num_examples: 268263
- name: validation
num_bytes: 184679438.0
num_examples: 507
- name: test
num_bytes: 302513272.625
num_examples: 1155
download_size: 52650349441
dataset_size: 52971345264.75
configs:
- config_name: release3
data_files:
- split: train
path: release3/train-*
- split: validation
path: release3/validation-*
- split: test
path: release3/test-*
---
# Dataset Card for "tedlium-prompted"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
anumafzal94/arxiv-2shot-4096 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: summary
dtype: string
- name: few-shot
dtype: bool
splits:
- name: test
num_bytes: 3262821.167598633
num_examples: 97
- name: train
num_bytes: 73114333.94539191
num_examples: 2066
download_size: 5283534
dataset_size: 76377155.11299054
---
# Dataset Card for "arxiv-2shot-4096"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
text-machine-lab/NEG-1500-SIMP-GEN | ---
license: mit
---
NEG-1500-SIMP-GEN is an extended version of NEG-136-SIMP, extended using GPT-3.
If this dataset is useful to you, please cite our work:
```
@article{shivagunde2023larger,
  title={Larger Probes Tell a Different Story: Extending Psycholinguistic Datasets Via In-Context Learning},
  author={Shivagunde, Namrata and Lialin, Vladislav and Rumshisky, Anna},
  journal={arXiv preprint arXiv:2303.16445},
  year={2023}
}
```
|
manishiitg/chat-instruct-hi-v4 | ---
dataset_info:
features:
- name: text
dtype: string
- name: lang
dtype: string
splits:
- name: train
num_bytes: 1776098461.760008
num_examples: 538108
download_size: 890758727
dataset_size: 1776098461.760008
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
gopikrsmscs/torch-issues | ---
license: apache-2.0
pretty_name: Pytorch Github Issues Metadata
size_categories:
- 1K<n<10K
task_categories:
- feature-extraction
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
slegalAI/btp2 | ---
license: other
---
|
Falah/chapter3_0_prompts | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 4051
num_examples: 13
download_size: 5038
dataset_size: 4051
---
# Dataset Card for "chapter3_0_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
benayas/banking_augmented_10pct_v0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: category
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1018873
num_examples: 10003
download_size: 411598
dataset_size: 1018873
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Hakeem750/sql__mini | ---
dataset_info:
features:
- name: question
dtype: string
- name: context
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 261293.9
num_examples: 1050
- name: test
num_bytes: 111983.1
num_examples: 450
download_size: 153254
dataset_size: 373277.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
loubnabnl/sample_kaggle | ---
dataset_info:
features:
- name: script
dtype: string
splits:
- name: train
num_bytes: 44044746
num_examples: 5000
download_size: 20169743
dataset_size: 44044746
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
luoruipu1/Valley-Instruct-65k | ---
license: apache-2.0
---
We have released the data for the second stage of Valley training, 65K examples in total. The data comes from the public video website https://www.jukinmedia.com/licensing and the open-source multimodal datasets VATEX and VIOLIN. It consists of four aspects: detailed description, complex reasoning, causal inference, and conversation. The detailed descriptions and complex reasoning come from Jukin Media, the conversations come from VATEX, and the causal inference comes from VIOLIN. Because causal inference data is too difficult, the 65K version does not include it; we release all the data, including the causal inference portion, to facilitate the community's VLLM research.
Since the video URLs on Jukin Media are dynamic, we provide a script `get_jukinmedia_videourl.py` to fetch the Jukin Media videos.
The VATEX part in `valley_instruct_65k` needs to be downloaded from YouTube; each vid is represented as \[youtube_id\]_\[start_second\]_\[end_second\], and you also need to crop the video according to the start and end seconds. |
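The vid convention above can be handled with a small helper. This is a sketch only: `parse_vid` and the ffmpeg invocation are illustrative names/commands, not part of the release. Note that YouTube IDs may themselves contain underscores, so the vid must be split from the right:

```python
def parse_vid(vid):
    """Split a VATEX vid into (youtube_id, start_second, end_second).

    YouTube IDs can contain underscores, so split from the right side.
    """
    youtube_id, start, end = vid.rsplit("_", 2)
    return youtube_id, int(start), int(end)

def crop_command(src, dst, start, end):
    # Build (but do not run) an ffmpeg command that keeps the
    # [start, end] second range of the downloaded video.
    return ["ffmpeg", "-ss", str(start), "-to", str(end),
            "-i", src, "-c", "copy", dst]

# An id containing an underscore still parses correctly.
youtube_id, start, end = parse_vid("a_b1cD2eF3gH_5_20")
cmd = crop_command(youtube_id + ".mp4", "clip.mp4", start, end)
```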
kllmagn/memEditor_Captions | ---
license: openrail
---
|
open-llm-leaderboard/details_Danielbrdz__CodeBarcenas-7b | ---
pretty_name: Evaluation run of Danielbrdz/CodeBarcenas-7b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Danielbrdz/CodeBarcenas-7b](https://huggingface.co/Danielbrdz/CodeBarcenas-7b)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Danielbrdz__CodeBarcenas-7b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-09-18T02:45:17.599730](https://huggingface.co/datasets/open-llm-leaderboard/details_Danielbrdz__CodeBarcenas-7b/blob/main/results_2023-09-18T02-45-17.599730.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0012583892617449664,\n\
\ \"em_stderr\": 0.0003630560893119179,\n \"f1\": 0.04712458053691294,\n\
\ \"f1_stderr\": 0.0011987531964379016,\n \"acc\": 0.31440371523474825,\n\
\ \"acc_stderr\": 0.009024224601859619\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0012583892617449664,\n \"em_stderr\": 0.0003630560893119179,\n\
\ \"f1\": 0.04712458053691294,\n \"f1_stderr\": 0.0011987531964379016\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.025018953752843062,\n \
\ \"acc_stderr\": 0.004302045046564279\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6037884767166535,\n \"acc_stderr\": 0.01374640415715496\n\
\ }\n}\n```"
repo_url: https://huggingface.co/Danielbrdz/CodeBarcenas-7b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|arc:challenge|25_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_09_18T02_45_17.599730
path:
- '**/details_harness|drop|3_2023-09-18T02-45-17.599730.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-09-18T02-45-17.599730.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_09_18T02_45_17.599730
path:
- '**/details_harness|gsm8k|5_2023-09-18T02-45-17.599730.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-09-18T02-45-17.599730.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hellaswag|10_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-05T12:21:59.082242.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-05T12:21:59.082242.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-05T12:21:59.082242.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_09_18T02_45_17.599730
path:
- '**/details_harness|winogrande|5_2023-09-18T02-45-17.599730.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-09-18T02-45-17.599730.parquet'
- config_name: results
data_files:
- split: 2023_09_05T12_21_59.082242
path:
- results_2023-09-05T12:21:59.082242.parquet
- split: 2023_09_18T02_45_17.599730
path:
- results_2023-09-18T02-45-17.599730.parquet
- split: latest
path:
- results_2023-09-18T02-45-17.599730.parquet
---
# Dataset Card for Evaluation run of Danielbrdz/CodeBarcenas-7b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Danielbrdz/CodeBarcenas-7b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [Danielbrdz/CodeBarcenas-7b](https://huggingface.co/Danielbrdz/CodeBarcenas-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Danielbrdz__CodeBarcenas-7b",
"harness_winogrande_5",
split="latest")
```
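Because the timestamped split names are zero-padded, they sort lexicographically in chronological order, so the newest run can also be picked without the "latest" alias. A minimal sketch using the two split names listed in this card's metadata:

```python
# Timestamped split names from this card's "results" configuration.
splits = ["2023_09_05T12_21_59.082242", "2023_09_18T02_45_17.599730"]

# Zero-padded timestamps sort lexicographically in chronological order,
# so max() yields the most recent run.
newest = max(splits)
print(newest)  # 2023_09_18T02_45_17.599730
```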
## Latest results
These are the [latest results from run 2023-09-18T02:45:17.599730](https://huggingface.co/datasets/open-llm-leaderboard/details_Danielbrdz__CodeBarcenas-7b/blob/main/results_2023-09-18T02-45-17.599730.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in its timestamped split and in the "latest" split of the corresponding configuration):
```python
{
"all": {
"em": 0.0012583892617449664,
"em_stderr": 0.0003630560893119179,
"f1": 0.04712458053691294,
"f1_stderr": 0.0011987531964379016,
"acc": 0.31440371523474825,
"acc_stderr": 0.009024224601859619
},
"harness|drop|3": {
"em": 0.0012583892617449664,
"em_stderr": 0.0003630560893119179,
"f1": 0.04712458053691294,
"f1_stderr": 0.0011987531964379016
},
"harness|gsm8k|5": {
"acc": 0.025018953752843062,
"acc_stderr": 0.004302045046564279
},
"harness|winogrande|5": {
"acc": 0.6037884767166535,
"acc_stderr": 0.01374640415715496
}
}
```
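The run timestamp appears in two derived forms in this repository: the split name (dashes and colons replaced by underscores) and the results filename (colons replaced by dashes). A small sketch of the apparent mapping, inferred from the file listing in this card rather than from any official API:

```python
def run_artifacts(ts: str):
    """Map a raw run timestamp to the split name and results filename,
    following the naming convention visible in this dataset's file list."""
    # Split names replace both '-' and ':' with '_'.
    split = ts.replace("-", "_").replace(":", "_")
    # Filenames keep the date dashes and replace only ':' with '-'.
    fname = "results_" + ts.replace(":", "-") + ".parquet"
    return split, fname

split, fname = run_artifacts("2023-09-18T02:45:17.599730")
print(split)   # 2023_09_18T02_45_17.599730
print(fname)   # results_2023-09-18T02-45-17.599730.parquet
```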
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
autoevaluate/autoeval-eval-banking77-default-b28a77-98055146974 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- banking77
eval_info:
task: multi_class_classification
model: Kirie/test-bert-base-banking77
metrics: []
dataset_name: banking77
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: Kirie/test-bert-base-banking77
* Dataset: banking77
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@i got my credit card](https://huggingface.co/i got my credit card) for evaluating this model. |
open-llm-leaderboard/details_netcat420__MFANNv0.4 | ---
pretty_name: Evaluation run of netcat420/MFANNv0.4
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [netcat420/MFANNv0.4](https://huggingface.co/netcat420/MFANNv0.4) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"latest\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_netcat420__MFANNv0.4\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-04-09T06:47:38.111444](https://huggingface.co/datasets/open-llm-leaderboard/details_netcat420__MFANNv0.4/blob/main/results_2024-04-09T06-47-38.111444.json)(note\
\ that there might be results for other tasks in the repo if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6371943268205861,\n\
\ \"acc_stderr\": 0.03246901392972694,\n \"acc_norm\": 0.6377161445813604,\n\
\ \"acc_norm_stderr\": 0.03312827704029973,\n \"mc1\": 0.5336597307221542,\n\
\ \"mc1_stderr\": 0.017463793867168106,\n \"mc2\": 0.7139881282663555,\n\
\ \"mc2_stderr\": 0.01519479061727556\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6706484641638225,\n \"acc_stderr\": 0.013734057652635474,\n\
\ \"acc_norm\": 0.6953924914675768,\n \"acc_norm_stderr\": 0.013449522109932483\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7017526389165505,\n\
\ \"acc_stderr\": 0.004565536808632543,\n \"acc_norm\": 0.8665604461262697,\n\
\ \"acc_norm_stderr\": 0.003393542074227652\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.37,\n \"acc_stderr\": 0.048523658709391,\n \
\ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.048523658709391\n },\n\
\ \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6222222222222222,\n\
\ \"acc_stderr\": 0.04188307537595853,\n \"acc_norm\": 0.6222222222222222,\n\
\ \"acc_norm_stderr\": 0.04188307537595853\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6776315789473685,\n \"acc_stderr\": 0.03803510248351585,\n\
\ \"acc_norm\": 0.6776315789473685,\n \"acc_norm_stderr\": 0.03803510248351585\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.55,\n\
\ \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"acc_norm_stderr\"\
: 0.05\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"\
acc\": 0.690566037735849,\n \"acc_stderr\": 0.02845015479411864,\n \
\ \"acc_norm\": 0.690566037735849,\n \"acc_norm_stderr\": 0.02845015479411864\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7430555555555556,\n\
\ \"acc_stderr\": 0.03653946969442099,\n \"acc_norm\": 0.7430555555555556,\n\
\ \"acc_norm_stderr\": 0.03653946969442099\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.45,\n \"acc_stderr\": 0.05,\n \"acc_norm\"\
: 0.45,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_computer_science|5\"\
: {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \
\ \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n \
\ },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\"\
: 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n\
\ \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-college_medicine|5\"\
: {\n \"acc\": 0.653179190751445,\n \"acc_stderr\": 0.036291466701596636,\n\
\ \"acc_norm\": 0.653179190751445,\n \"acc_norm_stderr\": 0.036291466701596636\n\
\ },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.39215686274509803,\n\
\ \"acc_stderr\": 0.04858083574266345,\n \"acc_norm\": 0.39215686274509803,\n\
\ \"acc_norm_stderr\": 0.04858083574266345\n },\n \"harness|hendrycksTest-computer_security|5\"\
: {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768079,\n \
\ \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768079\n \
\ },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5319148936170213,\n\
\ \"acc_stderr\": 0.03261936918467381,\n \"acc_norm\": 0.5319148936170213,\n\
\ \"acc_norm_stderr\": 0.03261936918467381\n },\n \"harness|hendrycksTest-econometrics|5\"\
: {\n \"acc\": 0.45614035087719296,\n \"acc_stderr\": 0.046854730419077895,\n\
\ \"acc_norm\": 0.45614035087719296,\n \"acc_norm_stderr\": 0.046854730419077895\n\
\ },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\"\
: 0.5241379310344828,\n \"acc_stderr\": 0.0416180850350153,\n \"acc_norm\"\
: 0.5241379310344828,\n \"acc_norm_stderr\": 0.0416180850350153\n },\n\
\ \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.42592592592592593,\n\
\ \"acc_stderr\": 0.02546714904546955,\n \"acc_norm\": 0.42592592592592593,\n\
\ \"acc_norm_stderr\": 0.02546714904546955\n },\n \"harness|hendrycksTest-formal_logic|5\"\
: {\n \"acc\": 0.3968253968253968,\n \"acc_stderr\": 0.043758884927270605,\n\
\ \"acc_norm\": 0.3968253968253968,\n \"acc_norm_stderr\": 0.043758884927270605\n\
\ },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.32,\n\
\ \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \
\ \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-high_school_biology|5\"\
: {\n \"acc\": 0.7774193548387097,\n \"acc_stderr\": 0.023664216671642518,\n\
\ \"acc_norm\": 0.7774193548387097,\n \"acc_norm_stderr\": 0.023664216671642518\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.49261083743842365,\n \"acc_stderr\": 0.035176035403610084,\n \"\
acc_norm\": 0.49261083743842365,\n \"acc_norm_stderr\": 0.035176035403610084\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\"\
: 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7696969696969697,\n \"acc_stderr\": 0.03287666758603489,\n\
\ \"acc_norm\": 0.7696969696969697,\n \"acc_norm_stderr\": 0.03287666758603489\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7878787878787878,\n \"acc_stderr\": 0.029126522834586818,\n \"\
acc_norm\": 0.7878787878787878,\n \"acc_norm_stderr\": 0.029126522834586818\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8860103626943006,\n \"acc_stderr\": 0.022935144053919443,\n\
\ \"acc_norm\": 0.8860103626943006,\n \"acc_norm_stderr\": 0.022935144053919443\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6487179487179487,\n \"acc_stderr\": 0.024203665177902803,\n\
\ \"acc_norm\": 0.6487179487179487,\n \"acc_norm_stderr\": 0.024203665177902803\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.34074074074074073,\n \"acc_stderr\": 0.028897748741131137,\n \
\ \"acc_norm\": 0.34074074074074073,\n \"acc_norm_stderr\": 0.028897748741131137\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6722689075630253,\n \"acc_stderr\": 0.03048991141767323,\n \
\ \"acc_norm\": 0.6722689075630253,\n \"acc_norm_stderr\": 0.03048991141767323\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.4105960264900662,\n \"acc_stderr\": 0.04016689594849929,\n \"\
acc_norm\": 0.4105960264900662,\n \"acc_norm_stderr\": 0.04016689594849929\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8275229357798165,\n \"acc_stderr\": 0.016197807956848036,\n \"\
acc_norm\": 0.8275229357798165,\n \"acc_norm_stderr\": 0.016197807956848036\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.4675925925925926,\n \"acc_stderr\": 0.03402801581358966,\n \"\
acc_norm\": 0.4675925925925926,\n \"acc_norm_stderr\": 0.03402801581358966\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8333333333333334,\n \"acc_stderr\": 0.026156867523931045,\n \"\
acc_norm\": 0.8333333333333334,\n \"acc_norm_stderr\": 0.026156867523931045\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7974683544303798,\n \"acc_stderr\": 0.026160568246601436,\n \
\ \"acc_norm\": 0.7974683544303798,\n \"acc_norm_stderr\": 0.026160568246601436\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6995515695067265,\n\
\ \"acc_stderr\": 0.03076935200822915,\n \"acc_norm\": 0.6995515695067265,\n\
\ \"acc_norm_stderr\": 0.03076935200822915\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8015267175572519,\n \"acc_stderr\": 0.034981493854624714,\n\
\ \"acc_norm\": 0.8015267175572519,\n \"acc_norm_stderr\": 0.034981493854624714\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.743801652892562,\n \"acc_stderr\": 0.039849796533028725,\n \"\
acc_norm\": 0.743801652892562,\n \"acc_norm_stderr\": 0.039849796533028725\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7685185185185185,\n\
\ \"acc_stderr\": 0.04077494709252626,\n \"acc_norm\": 0.7685185185185185,\n\
\ \"acc_norm_stderr\": 0.04077494709252626\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7361963190184049,\n \"acc_stderr\": 0.03462419931615624,\n\
\ \"acc_norm\": 0.7361963190184049,\n \"acc_norm_stderr\": 0.03462419931615624\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4375,\n\
\ \"acc_stderr\": 0.04708567521880525,\n \"acc_norm\": 0.4375,\n \
\ \"acc_norm_stderr\": 0.04708567521880525\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7864077669902912,\n \"acc_stderr\": 0.04058042015646034,\n\
\ \"acc_norm\": 0.7864077669902912,\n \"acc_norm_stderr\": 0.04058042015646034\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8632478632478633,\n\
\ \"acc_stderr\": 0.022509033937077802,\n \"acc_norm\": 0.8632478632478633,\n\
\ \"acc_norm_stderr\": 0.022509033937077802\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.65,\n \"acc_stderr\": 0.0479372485441102,\n \
\ \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n\
\ \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8148148148148148,\n\
\ \"acc_stderr\": 0.01389086216287616,\n \"acc_norm\": 0.8148148148148148,\n\
\ \"acc_norm_stderr\": 0.01389086216287616\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7225433526011561,\n \"acc_stderr\": 0.024105712607754307,\n\
\ \"acc_norm\": 0.7225433526011561,\n \"acc_norm_stderr\": 0.024105712607754307\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.38324022346368714,\n\
\ \"acc_stderr\": 0.016260159604429128,\n \"acc_norm\": 0.38324022346368714,\n\
\ \"acc_norm_stderr\": 0.016260159604429128\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7091503267973857,\n \"acc_stderr\": 0.02600480036395213,\n\
\ \"acc_norm\": 0.7091503267973857,\n \"acc_norm_stderr\": 0.02600480036395213\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.707395498392283,\n\
\ \"acc_stderr\": 0.02583989833487798,\n \"acc_norm\": 0.707395498392283,\n\
\ \"acc_norm_stderr\": 0.02583989833487798\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7222222222222222,\n \"acc_stderr\": 0.024922001168886324,\n\
\ \"acc_norm\": 0.7222222222222222,\n \"acc_norm_stderr\": 0.024922001168886324\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.46808510638297873,\n \"acc_stderr\": 0.029766675075873866,\n \
\ \"acc_norm\": 0.46808510638297873,\n \"acc_norm_stderr\": 0.029766675075873866\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.46870925684485004,\n\
\ \"acc_stderr\": 0.012745204626083138,\n \"acc_norm\": 0.46870925684485004,\n\
\ \"acc_norm_stderr\": 0.012745204626083138\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6433823529411765,\n \"acc_stderr\": 0.029097209568411945,\n\
\ \"acc_norm\": 0.6433823529411765,\n \"acc_norm_stderr\": 0.029097209568411945\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6633986928104575,\n \"acc_stderr\": 0.019117213911495148,\n \
\ \"acc_norm\": 0.6633986928104575,\n \"acc_norm_stderr\": 0.019117213911495148\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n\
\ \"acc_stderr\": 0.044612721759105085,\n \"acc_norm\": 0.6818181818181818,\n\
\ \"acc_norm_stderr\": 0.044612721759105085\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.6979591836734694,\n \"acc_stderr\": 0.0293936093198798,\n\
\ \"acc_norm\": 0.6979591836734694,\n \"acc_norm_stderr\": 0.0293936093198798\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8258706467661692,\n\
\ \"acc_stderr\": 0.026814951200421606,\n \"acc_norm\": 0.8258706467661692,\n\
\ \"acc_norm_stderr\": 0.026814951200421606\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.85,\n \"acc_stderr\": 0.0358870281282637,\n \
\ \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.0358870281282637\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5602409638554217,\n\
\ \"acc_stderr\": 0.038641399236991225,\n \"acc_norm\": 0.5602409638554217,\n\
\ \"acc_norm_stderr\": 0.038641399236991225\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8421052631578947,\n \"acc_stderr\": 0.027966785859160893,\n\
\ \"acc_norm\": 0.8421052631578947,\n \"acc_norm_stderr\": 0.027966785859160893\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5336597307221542,\n\
\ \"mc1_stderr\": 0.017463793867168106,\n \"mc2\": 0.7139881282663555,\n\
\ \"mc2_stderr\": 0.01519479061727556\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7947908445146015,\n \"acc_stderr\": 0.011350315707462063\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6383623957543594,\n \
\ \"acc_stderr\": 0.013234658351088776\n }\n}\n```"
repo_url: https://huggingface.co/netcat420/MFANNv0.4
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|arc:challenge|25_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|gsm8k|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hellaswag|10_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-09T06-47-38.111444.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-09T06-47-38.111444.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- '**/details_harness|winogrande|5_2024-04-09T06-47-38.111444.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-04-09T06-47-38.111444.parquet'
- config_name: results
data_files:
- split: 2024_04_09T06_47_38.111444
path:
- results_2024-04-09T06-47-38.111444.parquet
- split: latest
path:
- results_2024-04-09T06-47-38.111444.parquet
---
# Dataset Card for Evaluation run of netcat420/MFANNv0.4
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [netcat420/MFANNv0.4](https://huggingface.co/netcat420/MFANNv0.4) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_netcat420__MFANNv0.4",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-04-09T06:47:38.111444](https://huggingface.co/datasets/open-llm-leaderboard/details_netcat420__MFANNv0.4/blob/main/results_2024-04-09T06-47-38.111444.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6371943268205861,
"acc_stderr": 0.03246901392972694,
"acc_norm": 0.6377161445813604,
"acc_norm_stderr": 0.03312827704029973,
"mc1": 0.5336597307221542,
"mc1_stderr": 0.017463793867168106,
"mc2": 0.7139881282663555,
"mc2_stderr": 0.01519479061727556
},
"harness|arc:challenge|25": {
"acc": 0.6706484641638225,
"acc_stderr": 0.013734057652635474,
"acc_norm": 0.6953924914675768,
"acc_norm_stderr": 0.013449522109932483
},
"harness|hellaswag|10": {
"acc": 0.7017526389165505,
"acc_stderr": 0.004565536808632543,
"acc_norm": 0.8665604461262697,
"acc_norm_stderr": 0.003393542074227652
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.37,
"acc_stderr": 0.048523658709391,
"acc_norm": 0.37,
"acc_norm_stderr": 0.048523658709391
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6222222222222222,
"acc_stderr": 0.04188307537595853,
"acc_norm": 0.6222222222222222,
"acc_norm_stderr": 0.04188307537595853
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6776315789473685,
"acc_stderr": 0.03803510248351585,
"acc_norm": 0.6776315789473685,
"acc_norm_stderr": 0.03803510248351585
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.690566037735849,
"acc_stderr": 0.02845015479411864,
"acc_norm": 0.690566037735849,
"acc_norm_stderr": 0.02845015479411864
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7430555555555556,
"acc_stderr": 0.03653946969442099,
"acc_norm": 0.7430555555555556,
"acc_norm_stderr": 0.03653946969442099
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.653179190751445,
"acc_stderr": 0.036291466701596636,
"acc_norm": 0.653179190751445,
"acc_norm_stderr": 0.036291466701596636
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.39215686274509803,
"acc_stderr": 0.04858083574266345,
"acc_norm": 0.39215686274509803,
"acc_norm_stderr": 0.04858083574266345
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768079,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768079
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5319148936170213,
"acc_stderr": 0.03261936918467381,
"acc_norm": 0.5319148936170213,
"acc_norm_stderr": 0.03261936918467381
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.45614035087719296,
"acc_stderr": 0.046854730419077895,
"acc_norm": 0.45614035087719296,
"acc_norm_stderr": 0.046854730419077895
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5241379310344828,
"acc_stderr": 0.0416180850350153,
"acc_norm": 0.5241379310344828,
"acc_norm_stderr": 0.0416180850350153
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.42592592592592593,
"acc_stderr": 0.02546714904546955,
"acc_norm": 0.42592592592592593,
"acc_norm_stderr": 0.02546714904546955
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3968253968253968,
"acc_stderr": 0.043758884927270605,
"acc_norm": 0.3968253968253968,
"acc_norm_stderr": 0.043758884927270605
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7774193548387097,
"acc_stderr": 0.023664216671642518,
"acc_norm": 0.7774193548387097,
"acc_norm_stderr": 0.023664216671642518
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.49261083743842365,
"acc_stderr": 0.035176035403610084,
"acc_norm": 0.49261083743842365,
"acc_norm_stderr": 0.035176035403610084
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7696969696969697,
"acc_stderr": 0.03287666758603489,
"acc_norm": 0.7696969696969697,
"acc_norm_stderr": 0.03287666758603489
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7878787878787878,
"acc_stderr": 0.029126522834586818,
"acc_norm": 0.7878787878787878,
"acc_norm_stderr": 0.029126522834586818
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8860103626943006,
"acc_stderr": 0.022935144053919443,
"acc_norm": 0.8860103626943006,
"acc_norm_stderr": 0.022935144053919443
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6487179487179487,
"acc_stderr": 0.024203665177902803,
"acc_norm": 0.6487179487179487,
"acc_norm_stderr": 0.024203665177902803
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34074074074074073,
"acc_stderr": 0.028897748741131137,
"acc_norm": 0.34074074074074073,
"acc_norm_stderr": 0.028897748741131137
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6722689075630253,
"acc_stderr": 0.03048991141767323,
"acc_norm": 0.6722689075630253,
"acc_norm_stderr": 0.03048991141767323
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.4105960264900662,
"acc_stderr": 0.04016689594849929,
"acc_norm": 0.4105960264900662,
"acc_norm_stderr": 0.04016689594849929
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8275229357798165,
"acc_stderr": 0.016197807956848036,
"acc_norm": 0.8275229357798165,
"acc_norm_stderr": 0.016197807956848036
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4675925925925926,
"acc_stderr": 0.03402801581358966,
"acc_norm": 0.4675925925925926,
"acc_norm_stderr": 0.03402801581358966
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8333333333333334,
"acc_stderr": 0.026156867523931045,
"acc_norm": 0.8333333333333334,
"acc_norm_stderr": 0.026156867523931045
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7974683544303798,
"acc_stderr": 0.026160568246601436,
"acc_norm": 0.7974683544303798,
"acc_norm_stderr": 0.026160568246601436
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6995515695067265,
"acc_stderr": 0.03076935200822915,
"acc_norm": 0.6995515695067265,
"acc_norm_stderr": 0.03076935200822915
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8015267175572519,
"acc_stderr": 0.034981493854624714,
"acc_norm": 0.8015267175572519,
"acc_norm_stderr": 0.034981493854624714
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.743801652892562,
"acc_stderr": 0.039849796533028725,
"acc_norm": 0.743801652892562,
"acc_norm_stderr": 0.039849796533028725
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7685185185185185,
"acc_stderr": 0.04077494709252626,
"acc_norm": 0.7685185185185185,
"acc_norm_stderr": 0.04077494709252626
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7361963190184049,
"acc_stderr": 0.03462419931615624,
"acc_norm": 0.7361963190184049,
"acc_norm_stderr": 0.03462419931615624
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4375,
"acc_stderr": 0.04708567521880525,
"acc_norm": 0.4375,
"acc_norm_stderr": 0.04708567521880525
},
"harness|hendrycksTest-management|5": {
"acc": 0.7864077669902912,
"acc_stderr": 0.04058042015646034,
"acc_norm": 0.7864077669902912,
"acc_norm_stderr": 0.04058042015646034
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8632478632478633,
"acc_stderr": 0.022509033937077802,
"acc_norm": 0.8632478632478633,
"acc_norm_stderr": 0.022509033937077802
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.65,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.65,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8148148148148148,
"acc_stderr": 0.01389086216287616,
"acc_norm": 0.8148148148148148,
"acc_norm_stderr": 0.01389086216287616
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7225433526011561,
"acc_stderr": 0.024105712607754307,
"acc_norm": 0.7225433526011561,
"acc_norm_stderr": 0.024105712607754307
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.38324022346368714,
"acc_stderr": 0.016260159604429128,
"acc_norm": 0.38324022346368714,
"acc_norm_stderr": 0.016260159604429128
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7091503267973857,
"acc_stderr": 0.02600480036395213,
"acc_norm": 0.7091503267973857,
"acc_norm_stderr": 0.02600480036395213
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.707395498392283,
"acc_stderr": 0.02583989833487798,
"acc_norm": 0.707395498392283,
"acc_norm_stderr": 0.02583989833487798
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.024922001168886324,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.024922001168886324
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.46808510638297873,
"acc_stderr": 0.029766675075873866,
"acc_norm": 0.46808510638297873,
"acc_norm_stderr": 0.029766675075873866
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.46870925684485004,
"acc_stderr": 0.012745204626083138,
"acc_norm": 0.46870925684485004,
"acc_norm_stderr": 0.012745204626083138
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6433823529411765,
"acc_stderr": 0.029097209568411945,
"acc_norm": 0.6433823529411765,
"acc_norm_stderr": 0.029097209568411945
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6633986928104575,
"acc_stderr": 0.019117213911495148,
"acc_norm": 0.6633986928104575,
"acc_norm_stderr": 0.019117213911495148
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6818181818181818,
"acc_stderr": 0.044612721759105085,
"acc_norm": 0.6818181818181818,
"acc_norm_stderr": 0.044612721759105085
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6979591836734694,
"acc_stderr": 0.0293936093198798,
"acc_norm": 0.6979591836734694,
"acc_norm_stderr": 0.0293936093198798
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8258706467661692,
"acc_stderr": 0.026814951200421606,
"acc_norm": 0.8258706467661692,
"acc_norm_stderr": 0.026814951200421606
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.0358870281282637,
"acc_norm": 0.85,
"acc_norm_stderr": 0.0358870281282637
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5602409638554217,
"acc_stderr": 0.038641399236991225,
"acc_norm": 0.5602409638554217,
"acc_norm_stderr": 0.038641399236991225
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8421052631578947,
"acc_stderr": 0.027966785859160893,
"acc_norm": 0.8421052631578947,
"acc_norm_stderr": 0.027966785859160893
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5336597307221542,
"mc1_stderr": 0.017463793867168106,
"mc2": 0.7139881282663555,
"mc2_stderr": 0.01519479061727556
},
"harness|winogrande|5": {
"acc": 0.7947908445146015,
"acc_stderr": 0.011350315707462063
},
"harness|gsm8k|5": {
"acc": 0.6383623957543594,
"acc_stderr": 0.013234658351088776
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]
---
language:
- ko
license: cc-by-nc-4.0
size_categories:
- 10K<n<50K
task_categories:
- conversational
- text-classification
- token-classification
- table-question-answering
- question-answering
- zero-shot-classification
- summarization
- feature-extraction
- text-generation
- text2text-generation
pretty_name: OpenOrca
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 75960444
num_examples: 44394
download_size: 38457343
dataset_size: 75960444
---
# KOR-OpenOrca-Platypus-v2
- A version of the KOR-OpenOrca-Platypus dataset in which more than 200 translation errors were fixed by hand.
- If you add a simple attribution when using this dataset to build models or datasets, it would be a great help to our research 😭😭
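For reference, a minimal sketch of turning a row of this dataset into a single prompt string. The column names (`id`, `input`, `output`, `instruction`) come from the `dataset_info` block above; the `to_prompt` helper and the sample row are illustrative assumptions, not part of the dataset:

```python
# Loading requires network access, so it is shown commented out:
# from datasets import load_dataset
# ds = load_dataset("kyujinpy/KOR-OpenOrca-Platypus-v2", split="train")

def to_prompt(row):
    """Join the instruction and optional input columns into one prompt string."""
    if row.get("input"):
        return f"{row['instruction']}\n\n{row['input']}"
    return row["instruction"]

# Works on any row with the columns listed in dataset_info: id, input, output, instruction.
sample = {"id": "0", "instruction": "다음 문장을 영어로 번역하세요.", "input": "안녕하세요.", "output": "Hello."}
print(to_prompt(sample))
```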
## KOpen-platypus
Repo: [KOpen-platypus](https://huggingface.co/datasets/kyujinpy/KOpen-platypus)
- A high-quality Korean dataset
1. Code and comments are kept as-is; only the explanatory text is translated into Korean.
2. In addition to 1, outputs in Python, Java, Cpp, XML, and so on are preserved in their original data format as much as possible.
3. Standalone numbers and English terms are kept exactly as in the original output.
4. Incomplete DeepL Pro translations (for example, ones containing '[...]') were fixed by hand.
5. When a DeepL Pro translation came out at 50% or less of the original character count, the translation was corrected.
6. Texts of 1,500 characters or more were translated via the API instead.
7. Proper nouns are preserved as much as possible.
> Post-processing details
## OpenOrca-Ko-v2
1. NIV // ~1,500 examples
2. FLAN // ~9,000 examples
3. T0 // ~6,000 examples
4. CoT // ~2,000 examples
> Dataset composition
- Content fixed by hand (v2)
1. Fixed answers left in English (e.g., Nick -> 닉, Lucky -> 운이 좋음, ...).
2. Removed the KoCoT dataset.
3. Fixed some answers such as Yes, True, and False.
> Post-processing details
## Translation
Using DeepL Pro API. Thanks.
---
>Below is the original dataset card
## Table of Contents
- [Dataset Summary](#dataset-summary)
- [Dataset Attribution](#dataset-attribution)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Dataset Use](#dataset-use)
- [Use Cases](#use-cases)
- [Usage Caveats](#usage-caveats)
- [Getting Started](#getting-started)
<p><h1>🐋 The OpenOrca Dataset! 🐋</h1></p>

<a name="dataset-announcement"></a>
We are thrilled to announce the release of the OpenOrca dataset!
This rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the [Orca paper](https://arxiv.org/abs/2306.02707).
It has been instrumental in generating high-performing model checkpoints and serves as a valuable resource for all NLP researchers and developers!
# Official Models
## OpenOrca-Platypus2-13B
Our [latest release](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B), the first 13B model to score higher than LLaMA1-65B on the HuggingFace Leaderboard!
Released in partnership with Platypus.
## LlongOrca 7B & 13B
* Our [first 7B release](https://huggingface.co/Open-Orca/LlongOrca-7B-16k), trained on top of LLongMA2 to achieve 16,000 tokens context. #1 long context 7B model at release time, with >99% of the overall #1 model's performance.
* [LlongOrca-13B-16k](https://huggingface.co/Open-Orca/LlongOrca-13B-16k), trained on top of LLongMA2. #1 long context 13B model at release time, with >97% of the overall #1 model's performance.
## OpenOrcaxOpenChat-Preview2-13B
Our [second model](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B), highlighting that we've surpassed the performance reported in the Orca paper.
Was #1 at release time, now surpassed by our own OpenOrca-Platypus2-13B.
Released in partnership with OpenChat.
## OpenOrca-Preview1-13B
[OpenOrca-Preview1-13B](https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B)
This model was trained in less than a day, for <$200, with <10% of our data.
At release, it beat the current state of the art models on BigBench-Hard and AGIEval. Achieves ~60% of the improvements reported in the Orca paper.
<a name="dataset-summary"></a>
# Dataset Summary
The OpenOrca dataset is a collection of augmented [FLAN Collection data](https://arxiv.org/abs/2301.13688).
Currently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.
It is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.
The data is primarily used for training and evaluation in the field of natural language processing.
<a name="dataset-attribution"></a>
# Dataset Attribution
We would like to give special recognition to the following contributors for their significant efforts and dedication:
Teknium
WingLian/Caseus
Eric Hartford
NanoBit
Pankaj
Winddude
Rohan
http://AlignmentLab.ai:
Autometa
Entropi
AtlasUnified
NeverendingToast
NanoBit
WingLian/Caseus
Also of course, as always, TheBloke, for being the backbone of the whole community.
Many thanks to NanoBit and Caseus, makers of [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl), for lending us their expertise on the platform that developed and trained manticore, minotaur, and many others!
We are welcoming sponsors or collaborators to help us build these models to the scale they deserve. Please reach out via our socials:
http://Alignmentlab.ai https://discord.gg/n9hXaBPWxx
Want to visualize our full dataset? Check out our [Nomic Atlas Map](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2).
[<img src="https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B/resolve/main/OpenOrca%20Nomic%20Atlas.png" alt="Atlas Nomic Dataset Map" width="400" height="400" />](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2)
<a name="supported-tasks-and-leaderboards"></a>
# Supported Tasks and Leaderboards
This dataset supports a range of tasks including language modeling, text generation, and text augmentation.
It has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.
Further information on leaderboards will be updated as they become available.
<a name="languages"></a>
# Languages
The language of the data is primarily English.
<a name="dataset-structure"></a>
# Dataset Structure
<a name="data-instances"></a>
## Data Instances
A data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.
The response is then entered into the response field.
<a name="data-fields"></a>
## Data Fields
The fields are:
1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.
2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint
3) 'question', representing a question entry as provided by the FLAN Collection
4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.
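Illustratively, a record might look like the following. The field names match the list above; the values are invented for illustration:

```python
# A hypothetical OpenOrca record; the field names are real, the values are made up.
example = {
    "id": "niv.242684",
    "system_prompt": "You are a helpful assistant that follows instructions.",
    "question": "Translate the following sentence to French: 'Hello, world.'",
    "response": "Bonjour, le monde.",
}

# The source FLAN Collection submix can be recovered from the id prefix:
source = example["id"].split(".")[0]  # one of 'niv', 't0', 'cot', 'flan'
```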
<a name="data-splits"></a>
## Data Splits
The data is unsplit.
<a name="dataset-creation"></a>
# Dataset Creation
<a name="curation-rationale"></a>
## Curation Rationale
The dataset was created to provide a source of augmented text data for researchers and developers.
The datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4.
This "reasoning trace" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on.
<a name="source-data"></a>
## Source Data
The data is generated using techniques in alignment with the distributions outlined in the Orca paper, except as noted below:
1) There is not enough CoT data in the FLAN Collection to generate 150K zero-shot entries, as the paper purports to use.
We suspect this portion was either undocumented or misrepresented. We have used the ~75K points available.
2) We used the pre-generated FLAN Collection datasets hosted on HuggingFace under conceptofmind, e.g. [conceptofmind/flan2021](https://huggingface.co/datasets/conceptofmind/flan2021_submix_original).
These are referenced by the [official FLAN Collection repo](https://github.com/google-research/FLAN/tree/main/flan/v2) as the preferred data source.
However, these are a subset of the full FLAN Collection data, and have less than the required entries for the flan2021 and t0 submixes, by ~1.25M and 200k respectively.
Combined, this gave us ~1.5M fewer datapoints than in the original Orca paper. Completing the set is an ongoing work.
<a name="dataset-use"></a>
# Dataset Use
<a name="use-cases"></a>
## Use Cases
The dataset can be used for tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation.
<a name="usage-caveats"></a>
## Usage Caveats
Given that this is a work-in-progress dataset, it is recommended to regularly check for updates and improvements.
Further, the data should be used in accordance with the guidelines and recommendations outlined in the Orca paper.
<a name="getting-started"></a>
## Getting Started
This dataset is organized such that it can be loaded directly via the Hugging Face `datasets` library.
We recommend using streaming due to the large size of the files.
Regular updates and data generation progress can be monitored through the OpenOrca repository on Hugging Face.
# Citation
```bibtex
@misc{OpenOrca,
title = {OpenOrca: An Open Dataset of GPT Augmented FLAN Reasoning Traces},
author = {Wing Lian and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrca}},
}
```
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
year={2023},
eprint={2301.13688},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
```bibtex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@software{touvron2023llama1,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
``` |
prnv13/landcover_data_labels | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 1664170873.0
num_examples: 642
- name: validation
num_bytes: 206107466.0
num_examples: 80
- name: test
num_bytes: 202687518.0
num_examples: 81
download_size: 2071092233
dataset_size: 2072965857.0
---
# Dataset Card for "landcover_data_labels"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
obann001/20pic_b3ndavy_1024 | ---
license: openrail
---
|
Seanxh/twitter_dataset_1713097628 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 24481
num_examples: 58
download_size: 14068
dataset_size: 24481
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kpriyanshu256/MultiTabQA-multitable_pretraining-train-v2-59500 | ---
dataset_info:
features:
- name: tables
sequence: string
- name: table_names
sequence: string
- name: query
dtype: string
- name: answer
dtype: string
- name: source
dtype: string
- name: target
dtype: string
- name: source_latex
dtype: string
- name: target_latex
dtype: string
- name: source_html
dtype: string
- name: target_html
dtype: string
- name: source_markdown
dtype: string
- name: target_markdown
dtype: string
splits:
- name: train
num_bytes: 2895337332
num_examples: 500
download_size: 594755822
dataset_size: 2895337332
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
IainRatherThanIan/TouchRugby | ---
task_categories:
- text-generation
language:
- en
tags:
- fine-tuning
- touch rugby
size_categories:
- n<1K
---
# Touch Rugby Rules Dataset (for embeddings)
train.csv is taken from the [International Touch Website](https://cdn.internationaltouch.org/public/FIT%205th%20Edition%20Rulebook.pdf).
test.csv is copy-pasted from the abbreviated rules on the [UK Touch website](https://www.englandtouch.org.uk/develop/coaching/the-rules/). Note that this bypasses the PDF-to-text stage.
All text is chunked to a length of 100 tokens with 50% overlap.
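A minimal sketch of that chunking scheme (whitespace tokenisation is an assumption here; the original presumably uses a proper tokenizer):

```python
def chunk_tokens(text: str, chunk_size: int = 100, overlap: float = 0.5) -> list[str]:
    """Split text into fixed-size token windows with fractional overlap."""
    tokens = text.split()  # assumption: whitespace tokens, not subwords
    stride = max(1, int(chunk_size * (1 - overlap)))  # 50 for the settings above
    chunks = []
    for start in range(0, len(tokens), stride):
        chunks.append(" ".join(tokens[start:start + chunk_size]))
        if start + chunk_size >= len(tokens):  # last window reached the end
            break
    return chunks
```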
For educational and non-commercial use only. |
softcatala/catalan-dictionary | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- ca
license:
- gpl-2.0
- lgpl-2.1
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: catalan-dictionary
---
# Dataset Card for ca-text-corpus
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/Softcatala/catalan-dict-tools
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Catalan word lists with part-of-speech labeling, curated by humans. The dataset contains 1,180,773 forms, including verbs, nouns, adjectives, names, and toponyms. These word lists are used to build applications such as Catalan spellcheckers and verb-querying tools.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Catalan (`ca`).
## Dataset Structure
The dataset contains 3 columns:
* Form (e.g. cantaré)
* Lemma (e.g. cantar)
* POS tag (e.g. VMIF1S00)
The meaning of each POS tag is documented here: https://freeling-user-manual.readthedocs.io/en/latest/tagsets/tagset-ca/#part-of-speech-verb
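For illustration, a row in this three-column layout can be parsed like so (the whitespace delimiter and the concrete row are assumptions built from the example values above; check the repository files for the exact format):

```python
def parse_row(line: str) -> dict:
    """Split one dictionary row into its three columns: form, lemma, POS tag."""
    # Assumption: columns are whitespace-separated.
    form, lemma, pos = line.split()
    return {"form": form, "lemma": lemma, "pos": pos}

# Hypothetical row composed from the example values in the column list above.
entry = parse_row("cantaré cantar VMIF1S00")
```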
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[LGPL 2.1](https://www.gnu.org/licenses/old-licenses/lgpl-2.1.html).
[GPL 2.0](https://www.gnu.org/licenses/old-licenses/gpl-2.0.html).
### Citation Information
[More Information Needed]
### Contributions
Softcatalà
Jaume Ortolà
Joan Moratinos |
lily-hust/Tree-clips-RS-imagery | ---
license: mit
---
This dataset is a collection of image samples of Jacaranda, Palm, and other trees.
They were clipped from Eagle Aerial images of Orange County, California.
These samples have been used to train a deep learning model to classify Jacaranda, and can also be used to train a model for Palm.
roa7n/patched_test_p_200_f_membrane_m1_predictions | ---
dataset_info:
features:
- name: id
dtype: string
- name: sequence_str
dtype: string
- name: label
dtype: int64
- name: m1_preds
dtype: float32
splits:
- name: train
num_bytes: 1379688652
num_examples: 2057621
download_size: 120998098
dataset_size: 1379688652
---
# Dataset Card for "patched_test_p_200_f_membrane_m1_predictions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Nexdata/Shanghai_Dialect_Speech_Data_by_Mobile_Phone | ---
---
# Dataset Card for Nexdata/Shanghai_Dialect_Speech_Data_by_Mobile_Phone
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.nexdata.ai/datasets/56?source=Huggingface
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The dataset contains recordings from 2,956 speakers from Shanghai, captured in a quiet indoor environment. The recorded content includes multi-domain customer consultation, short messages, numbers, Shanghai POIs, etc. The corpus has no repetition, and the average sentence length is 12.68 words. Recording devices are mainstream Android phones and iPhones.
For more details, please refer to the link: https://www.nexdata.ai/datasets/56?source=Huggingface
### Supported Tasks and Leaderboards
automatic-speech-recognition, audio-speaker-identification: the dataset can be used to train models for automatic speech recognition (ASR) and speaker identification.
### Languages
Shanghai Dialect
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing
### Citation Information
[More Information Needed]
### Contributions
|
zambezivoice/zambezivoice_toi_text | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 777892
num_examples: 8881
download_size: 438920
dataset_size: 777892
---
# Dataset Card for "zambezivoice_toi_text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/bismarck_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of bismarck/ビスマルク/俾斯麦 (Azur Lane)
This is the dataset of bismarck/ビスマルク/俾斯麦 (Azur Lane), containing 500 images and their tags.
The core tags of this character are `blonde_hair, long_hair, blue_eyes, breasts, large_breasts, hat, peaked_cap, hair_between_eyes, military_hat`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 690.55 MiB | [Download](https://huggingface.co/datasets/CyberHarem/bismarck_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 384.31 MiB | [Download](https://huggingface.co/datasets/CyberHarem/bismarck_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1251 | 838.52 MiB | [Download](https://huggingface.co/datasets/CyberHarem/bismarck_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 606.30 MiB | [Download](https://huggingface.co/datasets/CyberHarem/bismarck_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1251 | 1.16 GiB | [Download](https://huggingface.co/datasets/CyberHarem/bismarck_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/bismarck_azurlane',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 |  |  |  |  |  | 1girl, bare_shoulders, detached_sleeves, looking_at_viewer, military_uniform, solo, brown_gloves, white_background, simple_background, blush, grey_thighhighs, upper_body |
| 1 | 15 |  |  |  |  |  | 1girl, detached_sleeves, gloves, military_uniform, solo, bare_shoulders, grey_thighhighs, looking_at_viewer, smile, cross, blush |
| 2 | 10 |  |  |  |  |  | 1girl, bare_shoulders, detached_sleeves, grey_thighhighs, looking_at_viewer, military_uniform, solo, turret, cannon, machinery, brown_gloves, blush, panties, smile |
| 3 | 16 |  |  |  |  |  | military_uniform, 1girl, solo, white_gloves, cleavage, fur-trimmed_cape, black_headwear, looking_at_viewer, holding, black_thighhighs, bangs, flag, black_skirt, fur-trimmed_legwear, jacket |
| 4 | 10 |  |  |  |  |  | capelet, military_uniform, miniskirt, twintails, sidelocks, iron_cross, looking_at_viewer, pleated_skirt, 1girl, black_gloves, black_pantyhose, black_skirt, necktie, solo, long_sleeves, multiple_girls, simple_background |
| 5 | 6 |  |  |  |  |  | 1girl, navel, solo, collarbone, looking_at_viewer, simple_background, black_bra, blush, cleavage, white_background, black_panties, open_mouth, upper_body |
| 6 | 7 |  |  |  |  |  | cleavage, dirndl, looking_at_viewer, 1girl, alternate_costume, beer_mug, blush, holding_cup, solo, choker, dress, iron_cross, waist_apron, blue_apron, medium_breasts, puffy_short_sleeves, smile, white_background |
| 7 | 9 |  |  |  |  |  | 1girl, navel, solo, blush, looking_at_viewer, cleavage, black_bikini, smile |
| 8 | 5 |  |  |  |  |  | 1girl, completely_nude, full_body, looking_at_viewer, nipples, solo, barefoot, blush, navel, smile, armpits, ass, feet, standing, very_long_hair |
| 9 | 11 |  |  |  |  |  | 1girl, bangs, cleavage, looking_at_viewer, black_leotard, solo, covered_navel, huge_breasts, lips, thick_thighs, black_thighhighs, choker, covered_nipples, curvy, blush, cape, collarbone, sitting, smile, veiny_breasts, bare_shoulders, highleg_leotard, skindentation, very_long_hair |
| 10 | 9 |  |  |  |  |  | 1boy, blush, hetero, nipples, penis, sex, 1girl, vaginal, censored, solo_focus, navel, bangs, completely_nude, cum_in_pussy, cowgirl_position, girl_on_top, open_mouth, sweat |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bare_shoulders | detached_sleeves | looking_at_viewer | military_uniform | solo | brown_gloves | white_background | simple_background | blush | grey_thighhighs | upper_body | gloves | smile | cross | turret | cannon | machinery | panties | white_gloves | cleavage | fur-trimmed_cape | black_headwear | holding | black_thighhighs | bangs | flag | black_skirt | fur-trimmed_legwear | jacket | capelet | miniskirt | twintails | sidelocks | iron_cross | pleated_skirt | black_gloves | black_pantyhose | necktie | long_sleeves | multiple_girls | navel | collarbone | black_bra | black_panties | open_mouth | dirndl | alternate_costume | beer_mug | holding_cup | choker | dress | waist_apron | blue_apron | medium_breasts | puffy_short_sleeves | black_bikini | completely_nude | full_body | nipples | barefoot | armpits | ass | feet | standing | very_long_hair | black_leotard | covered_navel | huge_breasts | lips | thick_thighs | covered_nipples | curvy | cape | sitting | veiny_breasts | highleg_leotard | skindentation | 1boy | hetero | penis | sex | vaginal | censored | solo_focus | cum_in_pussy | cowgirl_position | girl_on_top | sweat |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:-----------------|:-------------------|:--------------------|:-------------------|:-------|:---------------|:-------------------|:--------------------|:--------|:------------------|:-------------|:---------|:--------|:--------|:---------|:---------|:------------|:----------|:---------------|:-----------|:-------------------|:-----------------|:----------|:-------------------|:--------|:-------|:--------------|:----------------------|:---------|:----------|:------------|:------------|:------------|:-------------|:----------------|:---------------|:------------------|:----------|:---------------|:-----------------|:--------|:-------------|:------------|:----------------|:-------------|:---------|:--------------------|:-----------|:--------------|:---------|:--------|:--------------|:-------------|:-----------------|:----------------------|:---------------|:------------------|:------------|:----------|:-----------|:----------|:------|:-------|:-----------|:-----------------|:----------------|:----------------|:---------------|:-------|:---------------|:------------------|:--------|:-------|:----------|:----------------|:------------------|:----------------|:-------|:---------|:--------|:------|:----------|:-----------|:-------------|:---------------|:-------------------|:--------------|:--------|
| 0 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 15 |  |  |  |  |  | X | X | X | X | X | X | | | | X | X | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 10 |  |  |  |  |  | X | X | X | X | X | X | X | | | X | X | | | X | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 16 |  |  |  |  |  | X | | | X | X | X | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 10 |  |  |  |  |  | X | | | X | X | X | | | X | | | | | | | | | | | | | | | | | | | X | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 6 |  |  |  |  |  | X | | | X | | X | | X | X | X | | X | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 7 |  |  |  |  |  | X | | | X | | X | | X | | X | | | | X | | | | | | | X | | | | | | | | | | | | | | X | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 9 |  |  |  |  |  | X | | | X | | X | | | | X | | | | X | | | | | | | X | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 5 |  |  |  |  |  | X | | | X | | X | | | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | |
| 9 | 11 |  |  |  |  |  | X | X | | X | | X | | | | X | | | | X | | | | | | | X | | | | X | X | | | | | | | | | | | | | | | | | X | | | | | | | | X | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | |
| 10 | 9 |  |  |  |  |  | X | | | | | | | | | X | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | X | | | | X | | | | | | | | | | | | X | | X | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X |
|
unreal-hug/REAL_DATASET_SEG_401_6_lbls | ---
dataset_info:
features:
- name: pixel_values
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 108706033.0
num_examples: 401
download_size: 7968686
dataset_size: 108706033.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
liuyanchen1015/VALUE_mnli_null_genetive | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: train
num_bytes: 12248137
num_examples: 50122
- name: dev_matched
num_bytes: 283868
num_examples: 1167
- name: dev_mismatched
num_bytes: 330715
num_examples: 1276
- name: test_matched
num_bytes: 297546
num_examples: 1245
- name: test_mismatched
num_bytes: 343629
num_examples: 1336
download_size: 8810876
dataset_size: 13503895
---
# Dataset Card for "VALUE2_mnli_null_genetive"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jordanfan/billsum_abstracted_us_congress_117_bills_all | ---
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: index
dtype: int64
- name: id
dtype: string
- name: policy_areas
dtype: string
- name: cur_summary
dtype: string
- name: cur_text
dtype: string
- name: title
dtype: string
- name: titles_official
dtype: string
- name: titles_short
dtype: string
- name: sponsor_name
dtype: string
- name: sponsor_party
dtype: string
- name: sponsor_state
dtype: string
- name: cleaned_summary
dtype: string
- name: extracted_text
dtype: string
- name: extracted_text_375
dtype: string
- name: extracted_text_750
dtype: string
- name: extracted_text_1000
dtype: string
- name: bertsum_extracted_250
dtype: string
- name: bertsum_extracted_375
dtype: string
- name: bertsum_extracted_375_1000
dtype: string
- name: bertsum_extracted_250_1000
dtype: string
- name: bertsum_extracted_375_750
dtype: string
- name: bertsum_extracted_250_750
dtype: string
- name: bertsum_extracted_375_500
dtype: string
- name: bertsum_extracted_250_500
dtype: string
- name: bertsum_extracted_375_375
dtype: string
- name: bertsum_extracted_250_375
dtype: string
- name: text_len
dtype: int64
- name: billsum_abstracted_1000
dtype: string
- name: billsum_abstracted_500
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 464620556
num_examples: 10988
- name: val
num_bytes: 141019446
num_examples: 3310
- name: test
num_bytes: 15605214
num_examples: 367
download_size: 279701214
dataset_size: 621245216
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
---
|
open-llm-leaderboard/details_Locutusque__NeuralHyperion-2.0-Mistral-7B | ---
pretty_name: Evaluation run of Locutusque/NeuralHyperion-2.0-Mistral-7B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Locutusque/NeuralHyperion-2.0-Mistral-7B](https://huggingface.co/Locutusque/NeuralHyperion-2.0-Mistral-7B)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Locutusque__NeuralHyperion-2.0-Mistral-7B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-11T18:07:24.670248](https://huggingface.co/datasets/open-llm-leaderboard/details_Locutusque__NeuralHyperion-2.0-Mistral-7B/blob/main/results_2024-03-11T18-07-24.670248.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6172007555487146,\n\
\ \"acc_stderr\": 0.03265337030468996,\n \"acc_norm\": 0.6217600201613162,\n\
\ \"acc_norm_stderr\": 0.03331696996626112,\n \"mc1\": 0.3072215422276622,\n\
\ \"mc1_stderr\": 0.016150201321323006,\n \"mc2\": 0.4549937684572924,\n\
\ \"mc2_stderr\": 0.014438205280783314\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5503412969283277,\n \"acc_stderr\": 0.01453714444428473,\n\
\ \"acc_norm\": 0.5776450511945392,\n \"acc_norm_stderr\": 0.014434138713379976\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6139215295757817,\n\
\ \"acc_stderr\": 0.004858539527872462,\n \"acc_norm\": 0.8229436367257519,\n\
\ \"acc_norm_stderr\": 0.003809362761248109\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542129,\n \
\ \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542129\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5851851851851851,\n\
\ \"acc_stderr\": 0.04256193767901408,\n \"acc_norm\": 0.5851851851851851,\n\
\ \"acc_norm_stderr\": 0.04256193767901408\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6447368421052632,\n \"acc_stderr\": 0.03894734487013317,\n\
\ \"acc_norm\": 0.6447368421052632,\n \"acc_norm_stderr\": 0.03894734487013317\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.55,\n\
\ \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"acc_norm_stderr\"\
: 0.05\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"\
acc\": 0.6716981132075471,\n \"acc_stderr\": 0.028901593612411784,\n \
\ \"acc_norm\": 0.6716981132075471,\n \"acc_norm_stderr\": 0.028901593612411784\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7152777777777778,\n\
\ \"acc_stderr\": 0.03773809990686934,\n \"acc_norm\": 0.7152777777777778,\n\
\ \"acc_norm_stderr\": 0.03773809990686934\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620332,\n \
\ \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620332\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n\
\ \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6358381502890174,\n\
\ \"acc_stderr\": 0.03669072477416906,\n \"acc_norm\": 0.6358381502890174,\n\
\ \"acc_norm_stderr\": 0.03669072477416906\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.35294117647058826,\n \"acc_stderr\": 0.04755129616062947,\n\
\ \"acc_norm\": 0.35294117647058826,\n \"acc_norm_stderr\": 0.04755129616062947\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.78,\n \"acc_stderr\": 0.04163331998932262,\n \"acc_norm\": 0.78,\n\
\ \"acc_norm_stderr\": 0.04163331998932262\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5787234042553191,\n \"acc_stderr\": 0.03227834510146268,\n\
\ \"acc_norm\": 0.5787234042553191,\n \"acc_norm_stderr\": 0.03227834510146268\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.43859649122807015,\n\
\ \"acc_stderr\": 0.04668000738510455,\n \"acc_norm\": 0.43859649122807015,\n\
\ \"acc_norm_stderr\": 0.04668000738510455\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5655172413793104,\n \"acc_stderr\": 0.041307408795554966,\n\
\ \"acc_norm\": 0.5655172413793104,\n \"acc_norm_stderr\": 0.041307408795554966\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.3941798941798942,\n \"acc_stderr\": 0.02516798233389414,\n \"\
acc_norm\": 0.3941798941798942,\n \"acc_norm_stderr\": 0.02516798233389414\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3888888888888889,\n\
\ \"acc_stderr\": 0.04360314860077459,\n \"acc_norm\": 0.3888888888888889,\n\
\ \"acc_norm_stderr\": 0.04360314860077459\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252605,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252605\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7096774193548387,\n\
\ \"acc_stderr\": 0.02582210611941589,\n \"acc_norm\": 0.7096774193548387,\n\
\ \"acc_norm_stderr\": 0.02582210611941589\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.47783251231527096,\n \"acc_stderr\": 0.03514528562175008,\n\
\ \"acc_norm\": 0.47783251231527096,\n \"acc_norm_stderr\": 0.03514528562175008\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621505,\n \"acc_norm\"\
: 0.68,\n \"acc_norm_stderr\": 0.04688261722621505\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7333333333333333,\n \"acc_stderr\": 0.03453131801885417,\n\
\ \"acc_norm\": 0.7333333333333333,\n \"acc_norm_stderr\": 0.03453131801885417\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8181818181818182,\n \"acc_stderr\": 0.027479603010538797,\n \"\
acc_norm\": 0.8181818181818182,\n \"acc_norm_stderr\": 0.027479603010538797\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8601036269430051,\n \"acc_stderr\": 0.02503387058301518,\n\
\ \"acc_norm\": 0.8601036269430051,\n \"acc_norm_stderr\": 0.02503387058301518\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6358974358974359,\n \"acc_stderr\": 0.02439667298509476,\n \
\ \"acc_norm\": 0.6358974358974359,\n \"acc_norm_stderr\": 0.02439667298509476\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.32592592592592595,\n \"acc_stderr\": 0.02857834836547308,\n \
\ \"acc_norm\": 0.32592592592592595,\n \"acc_norm_stderr\": 0.02857834836547308\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.634453781512605,\n \"acc_stderr\": 0.031282177063684614,\n \
\ \"acc_norm\": 0.634453781512605,\n \"acc_norm_stderr\": 0.031282177063684614\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3509933774834437,\n \"acc_stderr\": 0.03896981964257375,\n \"\
acc_norm\": 0.3509933774834437,\n \"acc_norm_stderr\": 0.03896981964257375\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8128440366972477,\n \"acc_stderr\": 0.01672268452620016,\n \"\
acc_norm\": 0.8128440366972477,\n \"acc_norm_stderr\": 0.01672268452620016\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.44907407407407407,\n \"acc_stderr\": 0.03392238405321617,\n \"\
acc_norm\": 0.44907407407407407,\n \"acc_norm_stderr\": 0.03392238405321617\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7745098039215687,\n \"acc_stderr\": 0.029331162294251735,\n \"\
acc_norm\": 0.7745098039215687,\n \"acc_norm_stderr\": 0.029331162294251735\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7510548523206751,\n \"acc_stderr\": 0.028146970599422644,\n \
\ \"acc_norm\": 0.7510548523206751,\n \"acc_norm_stderr\": 0.028146970599422644\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7040358744394619,\n\
\ \"acc_stderr\": 0.030636591348699813,\n \"acc_norm\": 0.7040358744394619,\n\
\ \"acc_norm_stderr\": 0.030636591348699813\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7480916030534351,\n \"acc_stderr\": 0.03807387116306085,\n\
\ \"acc_norm\": 0.7480916030534351,\n \"acc_norm_stderr\": 0.03807387116306085\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8181818181818182,\n \"acc_stderr\": 0.03520893951097653,\n \"\
acc_norm\": 0.8181818181818182,\n \"acc_norm_stderr\": 0.03520893951097653\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7407407407407407,\n\
\ \"acc_stderr\": 0.04236511258094632,\n \"acc_norm\": 0.7407407407407407,\n\
\ \"acc_norm_stderr\": 0.04236511258094632\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7423312883435583,\n \"acc_stderr\": 0.03436150827846917,\n\
\ \"acc_norm\": 0.7423312883435583,\n \"acc_norm_stderr\": 0.03436150827846917\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.48214285714285715,\n\
\ \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.48214285714285715,\n\
\ \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7572815533980582,\n \"acc_stderr\": 0.04245022486384495,\n\
\ \"acc_norm\": 0.7572815533980582,\n \"acc_norm_stderr\": 0.04245022486384495\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8675213675213675,\n\
\ \"acc_stderr\": 0.022209309073165616,\n \"acc_norm\": 0.8675213675213675,\n\
\ \"acc_norm_stderr\": 0.022209309073165616\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.68,\n \"acc_stderr\": 0.046882617226215034,\n \
\ \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.046882617226215034\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8135376756066411,\n\
\ \"acc_stderr\": 0.013927751372001505,\n \"acc_norm\": 0.8135376756066411,\n\
\ \"acc_norm_stderr\": 0.013927751372001505\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6763005780346821,\n \"acc_stderr\": 0.025190181327608408,\n\
\ \"acc_norm\": 0.6763005780346821,\n \"acc_norm_stderr\": 0.025190181327608408\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.28938547486033517,\n\
\ \"acc_stderr\": 0.015166544550490305,\n \"acc_norm\": 0.28938547486033517,\n\
\ \"acc_norm_stderr\": 0.015166544550490305\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.738562091503268,\n \"acc_stderr\": 0.025160998214292456,\n\
\ \"acc_norm\": 0.738562091503268,\n \"acc_norm_stderr\": 0.025160998214292456\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6945337620578779,\n\
\ \"acc_stderr\": 0.02616058445014045,\n \"acc_norm\": 0.6945337620578779,\n\
\ \"acc_norm_stderr\": 0.02616058445014045\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7191358024691358,\n \"acc_stderr\": 0.025006469755799208,\n\
\ \"acc_norm\": 0.7191358024691358,\n \"acc_norm_stderr\": 0.025006469755799208\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4716312056737589,\n \"acc_stderr\": 0.029779450957303062,\n \
\ \"acc_norm\": 0.4716312056737589,\n \"acc_norm_stderr\": 0.029779450957303062\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4367666232073012,\n\
\ \"acc_stderr\": 0.012667701919603664,\n \"acc_norm\": 0.4367666232073012,\n\
\ \"acc_norm_stderr\": 0.012667701919603664\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6507352941176471,\n \"acc_stderr\": 0.02895975519682487,\n\
\ \"acc_norm\": 0.6507352941176471,\n \"acc_norm_stderr\": 0.02895975519682487\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6388888888888888,\n \"acc_stderr\": 0.01943177567703731,\n \
\ \"acc_norm\": 0.6388888888888888,\n \"acc_norm_stderr\": 0.01943177567703731\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n\
\ \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n\
\ \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7142857142857143,\n \"acc_stderr\": 0.028920583220675606,\n\
\ \"acc_norm\": 0.7142857142857143,\n \"acc_norm_stderr\": 0.028920583220675606\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8009950248756219,\n\
\ \"acc_stderr\": 0.028231365092758406,\n \"acc_norm\": 0.8009950248756219,\n\
\ \"acc_norm_stderr\": 0.028231365092758406\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.86,\n \"acc_stderr\": 0.0348735088019777,\n \
\ \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.0348735088019777\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5120481927710844,\n\
\ \"acc_stderr\": 0.03891364495835817,\n \"acc_norm\": 0.5120481927710844,\n\
\ \"acc_norm_stderr\": 0.03891364495835817\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8070175438596491,\n \"acc_stderr\": 0.030267457554898458,\n\
\ \"acc_norm\": 0.8070175438596491,\n \"acc_norm_stderr\": 0.030267457554898458\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3072215422276622,\n\
\ \"mc1_stderr\": 0.016150201321323006,\n \"mc2\": 0.4549937684572924,\n\
\ \"mc2_stderr\": 0.014438205280783314\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7900552486187845,\n \"acc_stderr\": 0.01144628062926263\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.4116755117513268,\n \
\ \"acc_stderr\": 0.01355589744989005\n }\n}\n```"
repo_url: https://huggingface.co/Locutusque/NeuralHyperion-2.0-Mistral-7B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|arc:challenge|25_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|gsm8k|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hellaswag|10_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-11T18-07-24.670248.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-11T18-07-24.670248.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- '**/details_harness|winogrande|5_2024-03-11T18-07-24.670248.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-11T18-07-24.670248.parquet'
- config_name: results
data_files:
- split: 2024_03_11T18_07_24.670248
path:
- results_2024-03-11T18-07-24.670248.parquet
- split: latest
path:
- results_2024-03-11T18-07-24.670248.parquet
---
# Dataset Card for Evaluation run of Locutusque/NeuralHyperion-2.0-Mistral-7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Locutusque/NeuralHyperion-2.0-Mistral-7B](https://huggingface.co/Locutusque/NeuralHyperion-2.0-Mistral-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Locutusque__NeuralHyperion-2.0-Mistral-7B",
"harness_winogrande_5",
split="latest")
```
## Latest results
These are the [latest results from run 2024-03-11T18:07:24.670248](https://huggingface.co/datasets/open-llm-leaderboard/details_Locutusque__NeuralHyperion-2.0-Mistral-7B/blob/main/results_2024-03-11T18-07-24.670248.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the "results" config and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.6172007555487146,
"acc_stderr": 0.03265337030468996,
"acc_norm": 0.6217600201613162,
"acc_norm_stderr": 0.03331696996626112,
"mc1": 0.3072215422276622,
"mc1_stderr": 0.016150201321323006,
"mc2": 0.4549937684572924,
"mc2_stderr": 0.014438205280783314
},
"harness|arc:challenge|25": {
"acc": 0.5503412969283277,
"acc_stderr": 0.01453714444428473,
"acc_norm": 0.5776450511945392,
"acc_norm_stderr": 0.014434138713379976
},
"harness|hellaswag|10": {
"acc": 0.6139215295757817,
"acc_stderr": 0.004858539527872462,
"acc_norm": 0.8229436367257519,
"acc_norm_stderr": 0.003809362761248109
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542129,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542129
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5851851851851851,
"acc_stderr": 0.04256193767901408,
"acc_norm": 0.5851851851851851,
"acc_norm_stderr": 0.04256193767901408
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6447368421052632,
"acc_stderr": 0.03894734487013317,
"acc_norm": 0.6447368421052632,
"acc_norm_stderr": 0.03894734487013317
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6716981132075471,
"acc_stderr": 0.028901593612411784,
"acc_norm": 0.6716981132075471,
"acc_norm_stderr": 0.028901593612411784
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7152777777777778,
"acc_stderr": 0.03773809990686934,
"acc_norm": 0.7152777777777778,
"acc_norm_stderr": 0.03773809990686934
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6358381502890174,
"acc_stderr": 0.03669072477416906,
"acc_norm": 0.6358381502890174,
"acc_norm_stderr": 0.03669072477416906
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.35294117647058826,
"acc_stderr": 0.04755129616062947,
"acc_norm": 0.35294117647058826,
"acc_norm_stderr": 0.04755129616062947
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932262,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932262
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5787234042553191,
"acc_stderr": 0.03227834510146268,
"acc_norm": 0.5787234042553191,
"acc_norm_stderr": 0.03227834510146268
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.43859649122807015,
"acc_stderr": 0.04668000738510455,
"acc_norm": 0.43859649122807015,
"acc_norm_stderr": 0.04668000738510455
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5655172413793104,
"acc_stderr": 0.041307408795554966,
"acc_norm": 0.5655172413793104,
"acc_norm_stderr": 0.041307408795554966
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3941798941798942,
"acc_stderr": 0.02516798233389414,
"acc_norm": 0.3941798941798942,
"acc_norm_stderr": 0.02516798233389414
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3888888888888889,
"acc_stderr": 0.04360314860077459,
"acc_norm": 0.3888888888888889,
"acc_norm_stderr": 0.04360314860077459
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252605,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252605
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7096774193548387,
"acc_stderr": 0.02582210611941589,
"acc_norm": 0.7096774193548387,
"acc_norm_stderr": 0.02582210611941589
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.47783251231527096,
"acc_stderr": 0.03514528562175008,
"acc_norm": 0.47783251231527096,
"acc_norm_stderr": 0.03514528562175008
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.68,
"acc_stderr": 0.04688261722621505,
"acc_norm": 0.68,
"acc_norm_stderr": 0.04688261722621505
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7333333333333333,
"acc_stderr": 0.03453131801885417,
"acc_norm": 0.7333333333333333,
"acc_norm_stderr": 0.03453131801885417
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8181818181818182,
"acc_stderr": 0.027479603010538797,
"acc_norm": 0.8181818181818182,
"acc_norm_stderr": 0.027479603010538797
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8601036269430051,
"acc_stderr": 0.02503387058301518,
"acc_norm": 0.8601036269430051,
"acc_norm_stderr": 0.02503387058301518
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6358974358974359,
"acc_stderr": 0.02439667298509476,
"acc_norm": 0.6358974358974359,
"acc_norm_stderr": 0.02439667298509476
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.32592592592592595,
"acc_stderr": 0.02857834836547308,
"acc_norm": 0.32592592592592595,
"acc_norm_stderr": 0.02857834836547308
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.634453781512605,
"acc_stderr": 0.031282177063684614,
"acc_norm": 0.634453781512605,
"acc_norm_stderr": 0.031282177063684614
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3509933774834437,
"acc_stderr": 0.03896981964257375,
"acc_norm": 0.3509933774834437,
"acc_norm_stderr": 0.03896981964257375
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8128440366972477,
"acc_stderr": 0.01672268452620016,
"acc_norm": 0.8128440366972477,
"acc_norm_stderr": 0.01672268452620016
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.44907407407407407,
"acc_stderr": 0.03392238405321617,
"acc_norm": 0.44907407407407407,
"acc_norm_stderr": 0.03392238405321617
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7745098039215687,
"acc_stderr": 0.029331162294251735,
"acc_norm": 0.7745098039215687,
"acc_norm_stderr": 0.029331162294251735
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7510548523206751,
"acc_stderr": 0.028146970599422644,
"acc_norm": 0.7510548523206751,
"acc_norm_stderr": 0.028146970599422644
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7040358744394619,
"acc_stderr": 0.030636591348699813,
"acc_norm": 0.7040358744394619,
"acc_norm_stderr": 0.030636591348699813
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7480916030534351,
"acc_stderr": 0.03807387116306085,
"acc_norm": 0.7480916030534351,
"acc_norm_stderr": 0.03807387116306085
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8181818181818182,
"acc_stderr": 0.03520893951097653,
"acc_norm": 0.8181818181818182,
"acc_norm_stderr": 0.03520893951097653
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7407407407407407,
"acc_stderr": 0.04236511258094632,
"acc_norm": 0.7407407407407407,
"acc_norm_stderr": 0.04236511258094632
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7423312883435583,
"acc_stderr": 0.03436150827846917,
"acc_norm": 0.7423312883435583,
"acc_norm_stderr": 0.03436150827846917
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.48214285714285715,
"acc_stderr": 0.047427623612430116,
"acc_norm": 0.48214285714285715,
"acc_norm_stderr": 0.047427623612430116
},
"harness|hendrycksTest-management|5": {
"acc": 0.7572815533980582,
"acc_stderr": 0.04245022486384495,
"acc_norm": 0.7572815533980582,
"acc_norm_stderr": 0.04245022486384495
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8675213675213675,
"acc_stderr": 0.022209309073165616,
"acc_norm": 0.8675213675213675,
"acc_norm_stderr": 0.022209309073165616
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.68,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.68,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8135376756066411,
"acc_stderr": 0.013927751372001505,
"acc_norm": 0.8135376756066411,
"acc_norm_stderr": 0.013927751372001505
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6763005780346821,
"acc_stderr": 0.025190181327608408,
"acc_norm": 0.6763005780346821,
"acc_norm_stderr": 0.025190181327608408
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.28938547486033517,
"acc_stderr": 0.015166544550490305,
"acc_norm": 0.28938547486033517,
"acc_norm_stderr": 0.015166544550490305
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.738562091503268,
"acc_stderr": 0.025160998214292456,
"acc_norm": 0.738562091503268,
"acc_norm_stderr": 0.025160998214292456
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6945337620578779,
"acc_stderr": 0.02616058445014045,
"acc_norm": 0.6945337620578779,
"acc_norm_stderr": 0.02616058445014045
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7191358024691358,
"acc_stderr": 0.025006469755799208,
"acc_norm": 0.7191358024691358,
"acc_norm_stderr": 0.025006469755799208
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4716312056737589,
"acc_stderr": 0.029779450957303062,
"acc_norm": 0.4716312056737589,
"acc_norm_stderr": 0.029779450957303062
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4367666232073012,
"acc_stderr": 0.012667701919603664,
"acc_norm": 0.4367666232073012,
"acc_norm_stderr": 0.012667701919603664
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6507352941176471,
"acc_stderr": 0.02895975519682487,
"acc_norm": 0.6507352941176471,
"acc_norm_stderr": 0.02895975519682487
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6388888888888888,
"acc_stderr": 0.01943177567703731,
"acc_norm": 0.6388888888888888,
"acc_norm_stderr": 0.01943177567703731
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302506,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302506
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7142857142857143,
"acc_stderr": 0.028920583220675606,
"acc_norm": 0.7142857142857143,
"acc_norm_stderr": 0.028920583220675606
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8009950248756219,
"acc_stderr": 0.028231365092758406,
"acc_norm": 0.8009950248756219,
"acc_norm_stderr": 0.028231365092758406
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.86,
"acc_stderr": 0.0348735088019777,
"acc_norm": 0.86,
"acc_norm_stderr": 0.0348735088019777
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5120481927710844,
"acc_stderr": 0.03891364495835817,
"acc_norm": 0.5120481927710844,
"acc_norm_stderr": 0.03891364495835817
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8070175438596491,
"acc_stderr": 0.030267457554898458,
"acc_norm": 0.8070175438596491,
"acc_norm_stderr": 0.030267457554898458
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3072215422276622,
"mc1_stderr": 0.016150201321323006,
"mc2": 0.4549937684572924,
"mc2_stderr": 0.014438205280783314
},
"harness|winogrande|5": {
"acc": 0.7900552486187845,
"acc_stderr": 0.01144628062926263
},
"harness|gsm8k|5": {
"acc": 0.4116755117513268,
"acc_stderr": 0.01355589744989005
}
}
```
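As a sketch of how these per-task scores aggregate: the leaderboard's MMLU number is (to a close approximation) the unweighted mean of the per-task `hendrycksTest` accuracies. The snippet below demonstrates this over a three-task excerpt of the dict above; the full computation would run over all 57 tasks.

```python
# Excerpt of the per-task results shown above (the full dict has 57
# hendrycksTest entries); values are copied from the JSON.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.28},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.5851851851851851},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.6447368421052632},
}

# Unweighted mean of per-task accuracies over this excerpt.
mmlu_tasks = [k for k in results if k.startswith("harness|hendrycksTest-")]
mmlu_avg = sum(results[k]["acc"] for k in mmlu_tasks) / len(mmlu_tasks)
print(round(mmlu_avg, 4))  # 0.5033
```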
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
nampdn-ai/tiny-orca-textbooks | ---
task_categories:
- text-generation
language:
- en
pretty_name: Tiny Orca Textbooks
size_categories:
- 100K<n<1M
license: cc-by-nc-sa-4.0
---
# Textbook-like Dataset: A Comprehensive Resource for Text-Based Skills Development in Small Language Models
This dataset is a collection of **147k synthetic textbooks** designed to enhance the text-based skills of small language models. The curriculum is meticulously structured to progress from simple to complex tasks, ensuring a gradual and effective learning experience during the pretraining or finetuning of SLMs.
The inspiration for this dataset comes from the technical report paper, [Textbooks Are All You Need II: phi-1.5 technical report](https://arxiv.org/abs/2309.05463). The source texts incorporated in this dataset are derived from the [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) dataset, a well-known resource in the field.
Emphasizing text-based skills, this dataset serves as a practical resource for small language models to learn from and exercise with, providing a diverse range of skills to acquire and adapt. The step-by-step progression mirrors the structure of a textbook, making it an ideal in-context learning sample.
### Disclaimer
While every effort has been made to ensure the accuracy of the information contained within this dataset, please note that it is provided 'as is' and without any warranties.
The use of the `textbook` field in this dataset is intended for research purposes only. You are advised to verify any information obtained from this dataset before acting upon it.
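As a minimal sketch of inspecting the `textbook` field before using it, the snippet below pre-filters overly short entries. The in-memory rows stand in for records that would normally come from `datasets.load_dataset("nampdn-ai/tiny-orca-textbooks", split="train")`; the example strings and the length threshold are illustrative assumptions.

```python
# Stand-in rows; in practice these would be loaded with
# datasets.load_dataset("nampdn-ai/tiny-orca-textbooks", split="train").
rows = [
    {"textbook": "Chapter 1: Counting. We begin with the numbers 1 to 10..."},
    {"textbook": "Tip"},  # too short to carry an actual lesson
]

# A simple pre-filter: keep only textbook entries long enough to be useful.
MIN_CHARS = 50
kept = [r for r in rows if len(r["textbook"]) >= MIN_CHARS]
print(len(kept))  # 1
```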
## Tiny Series
Explore the possibilities and limitations of building Small Language Models with these tiny gems of data!
- [TinyStories](https://arxiv.org/abs/2305.07759): The paper that sparked my interest in the journey of the tiny-* series.
- [tiny-codes](https://huggingface.co/datasets/nampdn-ai/tiny-codes): Collection of 1.6M short and clear code snippets that can help LLM models learn how to reason.
- [tiny-textbooks](https://huggingface.co/datasets/nampdn-ai/tiny-textbooks): 420k "things of internet" synthetic textbooks.
- [tiny-webtext](https://huggingface.co/datasets/nampdn-ai/tiny-webtext): A 6GB (4.5M records) variety of diverse webtext enriched with critical-thinking methods to make an unbiased English dataset.
- [tiny-lessons](https://huggingface.co/datasets/nampdn-ai/tiny-lessons): Subset of *tiny-textbooks* dataset, various lessons about "things of internet" augmented in a bite-sized textbook Markdown format.
- [tiny-bridgedict](https://huggingface.co/datasets/nampdn-ai/tiny-bridgedict): A dataset that links and transfers knowledge between English, Vietnamese, and Chinese for tiny multilingual models.
### Others small HQ datasets with textbook-like quality
- [devdocs.io](https://huggingface.co/datasets/nampdn-ai/devdocs.io): FreeCodeCamp has provided 189k comprehensive API documentation entries across a wide range of tech stacks and programming languages.
- [sciphi-python-textbook](https://huggingface.co/datasets/emrgnt-cmplxty/sciphi-python-textbook)
- [textbook_quality_programming](https://huggingface.co/datasets/vikp/textbook_quality_programming)
- [sciphi-textbooks-are-all-you-need](https://huggingface.co/datasets/emrgnt-cmplxty/sciphi-textbooks-are-all-you-need)
|
zh-tw-llm-dv-dv/zh-tw-llm-dev-sample-ta8k-d40d11-only_embeddings-tr_wiki_sg_alp-396867-c2048 | ---
dataset_info:
dataset_size: 3836796.0
download_size: 1208658
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- dtype: string
name: preview
splits:
- name: train
num_bytes: 3555167.0
num_examples: 500
- name: test
num_bytes: 281629.0
num_examples: 50
---
# zh-tw-llm-dev-sample-ta8k-d40d11-only_embeddings-tr_wiki_sg_alp-396867-c2048
This dataset is a part of the `zh-tw-llm-dev` project.
* Tokenizer: `zh-tw-llm-dev-tokenizer-a8k-d40d11`
* Built with: `translations`, `wikipedia`, `sharegpt`, `alpaca`
* Rows: `train` `500`, `test` `50`
* Max length: `2048`
* Full config:
```json
{"build_with": ["translations", "wikipedia", "sharegpt", "alpaca"], "preview_length": 256, "translations_settings": {"source_dataset": "zetavg/coct-en-zh-tw-translations-twp-300k", "lang_1_key": "en", "lang_2_key": "ch", "templates": ["English: {lang_1}\nChinese: {lang_2}", "Chinese: {lang_2}\nEnglish: {lang_1}"], "rows_limit": 100, "test_size": 0.1, "test_split_seed": 42, "test_rows_limit": 10}, "wikipedia_settings": {"source_dataset": "zetavg/zh-tw-wikipedia-dev", "exclude": [{"content_length_longer_than": 512}, {"match": "小行星", "in": "markdown", "in_range": [0, 40]}, {"match": "是中華人民共和國", "in": "markdown", "in_range": [0, 80]}], "rows_limit": 100, "test_size": 0.1, "test_split_seed": 42, "test_rows_limit": 10}, "sharegpt_settings": {"source_dataset": "zetavg/ShareGPT-Processed", "train_on_inputs": false, "languages": [{"en": 0.4}, "zh_Hant"], "rows_limit": 100, "test_size": 0.1, "test_split_seed": 42, "test_rows_limit": 10}, "alpaca_settings": {"source_dataset": "zetavg/traditional-chinese-alpaca-en-align", "template": "short", "train_on_inputs": false, "rows_limit": 100, "test_size": 0.1, "test_split_seed": 42, "test_rows_limit": 10}}
``` |
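For illustration, the `templates` entries in the `translations_settings` above appear to be plain Python format strings keyed by `lang_1`/`lang_2`. A minimal sketch of applying them, assuming that interpretation (the sentence pair is a made-up stand-in for a row of the source dataset):

```python
# Templates copied from the translations_settings above; the sentence
# pair is an illustrative stand-in for a row of the source dataset.
templates = [
    "English: {lang_1}\nChinese: {lang_2}",
    "Chinese: {lang_2}\nEnglish: {lang_1}",
]
pair = {"lang_1": "Hello, world.", "lang_2": "你好,世界。"}

texts = [t.format(**pair) for t in templates]
print(texts[0])
```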
joey234/mmlu-anatomy-dev | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: negate_openai_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: dev
num_bytes: 2111
num_examples: 5
download_size: 0
dataset_size: 2111
---
# Dataset Card for "mmlu-anatomy-dev"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
eastwind/self-instruct-base | ---
license: apache-2.0
---
# Dataset Card for Dataset Name
## Dataset Description
- **Repository: [Self-Instruct](https://github.com/yizhongw/self-instruct)**
- **Paper: [Self-Instruct: Aligning Language Model with Self Generated Instructions](https://arxiv.org/abs/2212.10560)**
### Dataset Summary
This dataset is a copy of yizhongw's data from the GitHub repository linked above; note that it was created on 24th Jan 2023.
## Dataset Structure
GPT3-finetuning format (prompt + completion)
### Data Fields
**Prompt:** `"Task: [Instruction] Output:"`

**Completion:** `"[Answer]<|endoftext|>"`
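A minimal sketch of assembling one record in this prompt/completion layout. The field names `prompt`/`completion` follow the GPT3-finetuning convention named above, and the instruction/answer strings are illustrative:

```python
def to_finetune_record(instruction: str, answer: str) -> dict:
    """Format one example in the prompt + completion layout described above."""
    return {
        "prompt": f"Task: {instruction} Output:",
        "completion": f"{answer}<|endoftext|>",
    }

record = to_finetune_record("List three primary colors.", "Red, blue, and yellow.")
print(record["prompt"])  # Task: List three primary colors. Output:
```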
### Data Splits
No splits
## Dataset Creation
### Curation Rationale
Efficiently create a large dataset by using GPT3 to generate the data.
### Annotations
The dataset was made and annotated by GPT3
### Dataset Curators
yizhongw
### Licensing Information
Apache 2.0
### Citation Information
I am not the creator of this dataset; please see the GitHub link above.
|
CyberHarem/altria_pendragon_lily_fgo | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of altria_pendragon_lily/アルトリア・ペンドラゴン〔リリィ〕/阿尔托莉雅·潘德拉贡〔Lily〕 (Fate/Grand Order)
This is the dataset of altria_pendragon_lily/アルトリア・ペンドラゴン〔リリィ〕/阿尔托莉雅·潘德拉贡〔Lily〕 (Fate/Grand Order), containing 500 images and their tags.
The core tags of this character are `blonde_hair, ahoge, green_eyes, ribbon, hair_ribbon, braid, blue_ribbon, bow, hair_between_eyes, ponytail`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 824.57 MiB | [Download](https://huggingface.co/datasets/CyberHarem/altria_pendragon_lily_fgo/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 500 | 708.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/altria_pendragon_lily_fgo/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1204 | 1.27 GiB | [Download](https://huggingface.co/datasets/CyberHarem/altria_pendragon_lily_fgo/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/altria_pendragon_lily_fgo',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
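As a self-contained sketch of how such clusters can be approached (the tag lists here are illustrative examples, not real dataset rows), counting tag frequencies across images is a natural first step before any clustering:

```python
from collections import Counter

# Count how often each tag appears across per-image tag lists; frequently
# co-occurring tags are candidates for outfit clusters like those tabled below.
def tag_frequencies(tag_lists):
    counter = Counter()
    for tags in tag_lists:
        counter.update(tags)
    return counter

# Illustrative per-image tag lists (not taken from the dataset).
images = [
    ["1girl", "armored_dress", "solo"],
    ["1girl", "hair_bow", "solo"],
]
print(tag_frequencies(images).most_common(2))
```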
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 18 |  |  |  |  |  | 1girl, armored_dress, blue_dress, excalibur_(fate/stay_night), holding_sword, solo, gauntlets, juliet_sleeves, breastplate, looking_at_viewer, hair_bun, short_hair, french_braid |
| 1 | 6 |  |  |  |  |  | 1girl, armored_dress, excalibur_(fate/stay_night), gauntlets, holding_sword, looking_at_viewer, solo, blue_cape, breastplate, crown, fur_trim, french_braid, upper_body |
| 2 | 5 |  |  |  |  |  | 1girl, armored_dress, blue_cape, blue_dress, breastplate, excalibur_(fate/stay_night), fur_trim, gauntlets, holding_sword, outdoors, solo, cloudy_sky, day, looking_at_viewer, closed_mouth, crown, hair_bun, short_hair, standing |
| 3 | 26 |  |  |  |  |  | 1girl, armored_dress, bare_shoulders, detached_sleeves, gauntlets, solo, sword, hair_bow, long_hair, caliburn_(fate) |
| 4 | 11 |  |  |  |  |  | 1girl, armored_dress, bare_shoulders, detached_sleeves, gauntlets, hair_bow, lily_(flower), object_namesake, solo, white_lily, white_flower, caliburn_(fate), sword, petals, holding |
| 5 | 12 |  |  |  |  |  | 1girl, bare_shoulders, cleavage, hair_bow, solo, holding_sword, looking_at_viewer, white_gloves, caliburn_(fate), detached_sleeves, medium_breasts, long_hair, lily_(flower), object_namesake, white_flower, white_lily, detached_collar, white_dress, black_bow, petals, smile, pantyhose, sheath |
| 6 | 5 |  |  |  |  |  | 1girl, blue_cape, closed_mouth, solo, upper_body, armored_dress, blue_cloak, breastplate, french_braid, gauntlets, looking_at_viewer, hair_bun, short_hair, simple_background, white_background, fur-trimmed_cape |
| 7 | 5 |  |  |  |  |  | 1girl, bare_shoulders, detached_sleeves, hair_bow, solo, cleavage, detached_collar, looking_at_viewer, medium_breasts, white_gloves, white_dress, white_thighhighs, smile |
| 8 | 51 |  |  |  |  |  | 1girl, solo, white_shirt, long_sleeves, looking_at_viewer, short_hair, neck_ribbon, sidelocks, blue_skirt, collared_shirt, closed_mouth, smile, simple_background, white_background, blush, single_hair_bun |
| 9 | 13 |  |  |  |  |  | 1girl, black_gloves, formal, long_hair, solo, black_necktie, collared_shirt, looking_at_viewer, long_sleeves, suit, black_jacket, simple_background, white_background, black_pants, closed_mouth, vest, adjusting_clothes, grey_shirt, wing_collar, sidelocks, upper_body |
| 10 | 5 |  |  |  |  |  | 1girl, formal, long_hair, necktie, pant_suit, solo, black_gloves, flower, weapon |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | armored_dress | blue_dress | excalibur_(fate/stay_night) | holding_sword | solo | gauntlets | juliet_sleeves | breastplate | looking_at_viewer | hair_bun | short_hair | french_braid | blue_cape | crown | fur_trim | upper_body | outdoors | cloudy_sky | day | closed_mouth | standing | bare_shoulders | detached_sleeves | sword | hair_bow | long_hair | caliburn_(fate) | lily_(flower) | object_namesake | white_lily | white_flower | petals | holding | cleavage | white_gloves | medium_breasts | detached_collar | white_dress | black_bow | smile | pantyhose | sheath | blue_cloak | simple_background | white_background | fur-trimmed_cape | white_thighhighs | white_shirt | long_sleeves | neck_ribbon | sidelocks | blue_skirt | collared_shirt | blush | single_hair_bun | black_gloves | formal | black_necktie | suit | black_jacket | black_pants | vest | adjusting_clothes | grey_shirt | wing_collar | necktie | pant_suit | flower | weapon |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:----------------|:-------------|:------------------------------|:----------------|:-------|:------------|:-----------------|:--------------|:--------------------|:-----------|:-------------|:---------------|:------------|:--------|:-----------|:-------------|:-----------|:-------------|:------|:---------------|:-----------|:-----------------|:-------------------|:--------|:-----------|:------------|:------------------|:----------------|:------------------|:-------------|:---------------|:---------|:----------|:-----------|:---------------|:-----------------|:------------------|:--------------|:------------|:--------|:------------|:---------|:-------------|:--------------------|:-------------------|:-------------------|:-------------------|:--------------|:---------------|:--------------|:------------|:-------------|:-----------------|:--------|:------------------|:---------------|:---------|:----------------|:-------|:---------------|:--------------|:-------|:--------------------|:-------------|:--------------|:----------|:------------|:---------|:---------|
| 0 | 18 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | X | | X | X | X | X | | X | X | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | | X | X | X | X | | X | X | X | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 26 |  |  |  |  |  | X | X | | | | X | X | | | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 11 |  |  |  |  |  | X | X | | | | X | X | | | | | | | | | | | | | | | | X | X | X | X | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 12 |  |  |  |  |  | X | | | | X | X | | | | X | | | | | | | | | | | | | X | X | | X | X | X | X | X | X | X | X | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 5 |  |  |  |  |  | X | X | | | | X | X | | X | X | X | X | X | X | | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 5 |  |  |  |  |  | X | | | | | X | | | | X | | | | | | | | | | | | | X | X | | X | | | | | | | | | X | X | X | X | X | | X | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 51 |  |  |  |  |  | X | | | | | X | | | | X | | X | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | X | | | | X | X | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 9 | 13 |  |  |  |  |  | X | | | | | X | | | | X | | | | | | | X | | | | X | | | | | | X | | | | | | | | | | | | | | | | | | X | X | | | | X | | X | | X | | | X | X | X | X | X | X | X | X | X | X | | | | |
| 10 | 5 |  |  |  |  |  | X | | | | | X | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | | | | | | | | | X | X | X | X |
|
Gbssreejith/Sm_Type1_dataset_finetuned_3_White | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 48967673.0
num_examples: 216
- name: val
num_bytes: 5427570.0
num_examples: 25
download_size: 51539058
dataset_size: 54395243.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
---
|
tyzhu/squad_qa_title_v5_full_random_permute_2 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer
dtype: string
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 5088553.556198347
num_examples: 3365
- name: validation
num_bytes: 353148
num_examples: 300
download_size: 1278211
dataset_size: 5441701.556198347
---
# Dataset Card for "squad_qa_title_v5_full_random_permute_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/MULTI_VALUE_sst2_no_gender_distinction | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev
num_bytes: 10229
num_examples: 66
- name: test
num_bytes: 26798
num_examples: 168
- name: train
num_bytes: 350713
num_examples: 2840
download_size: 206746
dataset_size: 387740
---
# Dataset Card for "MULTI_VALUE_sst2_no_gender_distinction"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dumyy/Title_CC | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
splits:
- name: train
num_bytes: 6364154
num_examples: 1846
- name: test
num_bytes: 6381296
num_examples: 1500
download_size: 1320245
dataset_size: 12745450
---
# Dataset Card for "Title_CC"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ZiHDeng/hf-ny8-v5 | ---
dataset_info:
features:
- name: repo_id
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 618878
num_examples: 1661
download_size: 20840
dataset_size: 618878
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
medarc/pubmed | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: meta
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 93599167012
num_examples: 2964753
download_size: 41109917186
dataset_size: 93599167012
---
# Dataset Card for "pubmed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AICU/LuC4 | ---
license: creativeml-openrail-m
tags:
- LuC4
- SDXL
- LoRA
pretty_name: LuC4
task_categories:
- feature-extraction
language:
- ja
size_categories:
- 100M<n<1B
--- |
Intuit-GenSRF/toxigen-train-annotated | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
splits:
- name: train
num_bytes: 951313
num_examples: 8960
download_size: 553547
dataset_size: 951313
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "toxigen-train-annotated"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dog/fuego-20230214-215453-17bd4b | ---
tags:
- fuego
fuego:
id: 20230214-215453-17bd4b
status: done
script: run.py
requirements_file: requirements.txt
space_id: dog/fuego-20230214-215453-17bd4b
space_hardware: cpu-basic
---
|
jjz5463/probing_dataset_2.0 | ---
size_categories:
- n<1K
dataset_info:
features:
- name: generated sentences
dtype: string
- name: feature
dtype: string
splits:
- name: train
num_bytes: 84018
num_examples: 800
download_size: 30195
dataset_size: 84018
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
library_name: datadreamer
tags:
- datadreamer
- datadreamer-0.25.0
- synthetic
- gpt-4
---
# Dataset Card
[Add more information here](https://huggingface.co/datasets/templates/dataset-card-example)
---
This dataset was produced with [DataDreamer 🤖💤](https://datadreamer.dev). The synthetic dataset card can be found [here](datadreamer.json). |
imdatta0/ultrachat_2k | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 11778581.136795517
num_examples: 2000
- name: test
num_bytes: 1472322.6420994396
num_examples: 250
download_size: 6683862
dataset_size: 13250903.778894957
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
carnival13/eng_DA_tokenized_rt5 | ---
dataset_info:
features:
- name: pass_label
dtype: int64
- name: input
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 155868746
num_examples: 138200
download_size: 50879179
dataset_size: 155868746
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "eng_DA_tokenized_rt5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-eval-squad_v2-squad_v2-ea058a-1765461442 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad_v2
eval_info:
task: extractive_question_answering
model: husnu/bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3TQUAD2-finetuned_lr-2e-05_epochs-1
metrics: []
dataset_name: squad_v2
dataset_config: squad_v2
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: husnu/bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3TQUAD2-finetuned_lr-2e-05_epochs-1
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Jets](https://huggingface.co/Jets) for evaluating this model. |
open-llm-leaderboard/details_TFLai__Orca-Nova-13B | ---
pretty_name: Evaluation run of TFLai/Orca-Nova-13B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [TFLai/Orca-Nova-13B](https://huggingface.co/TFLai/Orca-Nova-13B) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TFLai__Orca-Nova-13B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-28T09:46:46.108882](https://huggingface.co/datasets/open-llm-leaderboard/details_TFLai__Orca-Nova-13B/blob/main/results_2023-10-28T09-46-46.108882.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.003984899328859061,\n\
\ \"em_stderr\": 0.0006451805848102392,\n \"f1\": 0.07519924496644283,\n\
\ \"f1_stderr\": 0.0016030527256702374,\n \"acc\": 0.46032756632616734,\n\
\ \"acc_stderr\": 0.010706817769913408\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.003984899328859061,\n \"em_stderr\": 0.0006451805848102392,\n\
\ \"f1\": 0.07519924496644283,\n \"f1_stderr\": 0.0016030527256702374\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.14480667172100076,\n \
\ \"acc_stderr\": 0.009693234799052708\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7758484609313339,\n \"acc_stderr\": 0.011720400740774108\n\
\ }\n}\n```"
repo_url: https://huggingface.co/TFLai/Orca-Nova-13B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|arc:challenge|25_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_28T09_46_46.108882
path:
- '**/details_harness|drop|3_2023-10-28T09-46-46.108882.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-28T09-46-46.108882.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_28T09_46_46.108882
path:
- '**/details_harness|gsm8k|5_2023-10-28T09-46-46.108882.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-28T09-46-46.108882.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hellaswag|10_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-12T12-05-50.844177.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-12T12-05-50.844177.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-12T12-05-50.844177.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_28T09_46_46.108882
path:
- '**/details_harness|winogrande|5_2023-10-28T09-46-46.108882.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-28T09-46-46.108882.parquet'
- config_name: results
data_files:
- split: 2023_09_12T12_05_50.844177
path:
- results_2023-09-12T12-05-50.844177.parquet
- split: 2023_10_28T09_46_46.108882
path:
- results_2023-10-28T09-46-46.108882.parquet
- split: latest
path:
- results_2023-10-28T09-46-46.108882.parquet
---
# Dataset Card for Evaluation run of TFLai/Orca-Nova-13B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/TFLai/Orca-Nova-13B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [TFLai/Orca-Nova-13B](https://huggingface.co/TFLai/Orca-Nova-13B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named after the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration, "results", stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TFLai__Orca-Nova-13B",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2023-10-28T09:46:46.108882](https://huggingface.co/datasets/open-llm-leaderboard/details_TFLai__Orca-Nova-13B/blob/main/results_2023-10-28T09-46-46.108882.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"em": 0.003984899328859061,
"em_stderr": 0.0006451805848102392,
"f1": 0.07519924496644283,
"f1_stderr": 0.0016030527256702374,
"acc": 0.46032756632616734,
"acc_stderr": 0.010706817769913408
},
"harness|drop|3": {
"em": 0.003984899328859061,
"em_stderr": 0.0006451805848102392,
"f1": 0.07519924496644283,
"f1_stderr": 0.0016030527256702374
},
"harness|gsm8k|5": {
"acc": 0.14480667172100076,
"acc_stderr": 0.009693234799052708
},
"harness|winogrande|5": {
"acc": 0.7758484609313339,
"acc_stderr": 0.011720400740774108
}
}
```
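As a quick consistency check, the aggregated `acc` in the `"all"` block is the mean of the two per-task accuracies (values copied from the JSON above):

```python
# Per-task accuracies copied from the latest results above.
results = {
    "harness|winogrande|5": {"acc": 0.7758484609313339},
    "harness|gsm8k|5": {"acc": 0.14480667172100076},
}

# Unweighted mean over the accuracy-style tasks.
mean_acc = sum(v["acc"] for v in results.values()) / len(results)
print(round(mean_acc, 10))  # 0.4603275663
```

This matches the `"acc": 0.46032756632616734` reported in the `"all"` block.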
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
PROCESOS/id_reversoAntiguo | ---
license: c-uda
---
|
rootstrap-org/books-ratings | ---
license: mit
language:
- en
pretty_name: Book ratings data
---
## Book ratings
This dataset has two files:
* [Books_rating.csv](https://huggingface.co/datasets/rootstrap-org/books-ratings/blob/main/Books_rating.csv): book ratings given by users
* [books_data.csv](https://huggingface.co/datasets/rootstrap-org/books-ratings/blob/main/books_data.csv): metadata about the books (title, author, genre, etc.)
It is intended as an input dataset for training a recommender system.
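As an illustrative sketch of how the ratings file could feed a recommender, here is a minimal user-item matrix built with pandas. The column names `User_id`, `Title`, and `review/score` are assumptions based on the Kaggle source, and the tiny frame below is synthetic stand-in data:

```python
import pandas as pd

# Synthetic stand-in for Books_rating.csv; real column names may differ.
ratings = pd.DataFrame({
    "User_id": ["u1", "u1", "u2", "u2", "u3"],
    "Title": ["Dune", "Emma", "Dune", "Ivanhoe", "Emma"],
    "review/score": [5.0, 3.0, 4.0, 2.0, 5.0],
})

# User-item matrix: a common starting point for collaborative filtering.
matrix = ratings.pivot_table(index="User_id", columns="Title",
                             values="review/score")
print(matrix.loc["u1", "Dune"])  # 5.0
```

Missing entries in the matrix are `NaN`, i.e. books the user has not rated; those are the cells a recommender would try to predict.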
It was obtained from [this dataset of Amazon book reviews](https://www.kaggle.com/datasets/mohamedbakhet/amazon-books-reviews) on Kaggle. |
MazzzyStar/riddles_evolved | ---
dataset_info:
features:
- name: number
dtype: int64
- name: messages
sequence: string
splits:
- name: train
num_bytes: 365739
num_examples: 281
download_size: 195536
dataset_size: 365739
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "riddles_evolved"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
universalner/uner_llm_inst_swedish | ---
license: cc-by-sa-4.0
language:
- sv
task_categories:
- token-classification
dataset_info:
- config_name: sv_pud
splits:
- name: test
num_examples: 999
- config_name: sv_talbanken
splits:
- name: test
num_examples: 1218
- name: dev
num_examples: 503
- name: train
num_examples: 4302
---
# Dataset Card for Universal NER v1 in the Aya format - Swedish subset
This dataset is a format conversion of the Swedish data in the original Universal NER v1 into the Aya instruction format, and it is released here under the same CC-BY-SA 4.0 license and conditions.
The dataset contains different subsets and their dev/test/train splits, depending on the language; see the details below.
## Dataset Details
For the original Universal NER dataset v1 and more details, please check https://huggingface.co/datasets/universalner/universal_ner.
For details on the conversion to the Aya instructions format, please see the complete version: https://huggingface.co/datasets/universalner/uner_llm_instructions
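To give a rough idea of what an instruction-format conversion looks like, below is a simplified, hypothetical sketch that turns BIO-tagged tokens into an instruction/response pair. The prompt wording and the `to_instruction` helper are illustrative assumptions; the actual Aya template used for the official conversion may differ:

```python
# Hypothetical sketch: BIO-tagged tokens -> instruction/response pair.
# The real Aya conversion template may differ from this wording.
def to_instruction(tokens, tags):
    entities = []
    current, label = [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                entities.append((" ".join(current), label))
            current, label = [tok], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(tok)
        else:
            if current:
                entities.append((" ".join(current), label))
            current, label = [], None
    if current:
        entities.append((" ".join(current), label))
    instruction = "List all named entities in: " + " ".join(tokens)
    response = "; ".join(f"{text} ({lab})" for text, lab in entities) or "None"
    return instruction, response

ins, out = to_instruction(
    ["Stig", "bor", "i", "Stockholm", "."],
    ["B-PER", "O", "O", "B-LOC", "O"],
)
print(out)  # Stig (PER); Stockholm (LOC)
```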
## Citation
If you use this dataset version, feel free to cite/footnote the complete version at https://huggingface.co/datasets/universalner/uner_llm_instructions, but please also cite the *original dataset publication*.
**BibTeX:**
```
@preprint{mayhew2023universal,
title={{Universal NER: A Gold-Standard Multilingual Named Entity Recognition Benchmark}},
author={Stephen Mayhew and Terra Blevins and Shuheng Liu and Marek Šuppa and Hila Gonen and Joseph Marvin Imperial and Börje F. Karlsson and Peiqin Lin and Nikola Ljubešić and LJ Miranda and Barbara Plank and Arij Riabi and Yuval Pinter},
year={2023},
eprint={2311.09122},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
ise-uiuc/Magicoder-Evol-Instruct-110K | ---
license: apache-2.0
size_categories:
- 100K<n<1M
task_categories:
- text-generation
- conversational
---
A decontaminated version of [evol-codealpaca-v1](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1). Decontamination is done in the same way as for StarCoder ([bigcode decontamination process](https://github.com/bigcode-project/bigcode-dataset/tree/main/decontamination)). |
haris001/RAG_DS_PDF | ---
license: apache-2.0
---
|
Yerson2277/dataset-m | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 492388
num_examples: 807
download_size: 234347
dataset_size: 492388
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Parleatacoeur/leyesperuanas | ---
language:
- es
tags:
- legal
--- |
rai-sandeep/test_ds_text_1 | ---
dataset_info:
features:
- name: category
dtype: string
- name: topic
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 4145
num_examples: 4
download_size: 10708
dataset_size: 4145
---
# Dataset Card for "test_ds_text_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
huggingartists/vladimir-vysotsky | ---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/vladimir-vysotsky"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.124261 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/18735fe10bace7b3f615b2da9c95ac73.938x938x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/vladimir-vysotsky">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Владимир Высоцкий (Vladimir Vysotsky)</div>
<a href="https://genius.com/artists/vladimir-vysotsky">
<div style="text-align: center; font-size: 14px;">@vladimir-vysotsky</div>
</a>
</div>
### Dataset Summary
This dataset contains lyrics parsed from Genius and is designed for generating lyrics with HuggingArtists.
A model trained on it is available [here](https://huggingface.co/huggingartists/vladimir-vysotsky).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/vladimir-vysotsky")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|47| -| -|
The 'train' split can easily be divided into 'train', 'validation', and 'test' splits with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset("huggingartists/vladimir-vysotsky")

# Fractions for the three splits (must sum to 1).
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03

# Cut the texts at the cumulative fraction boundaries.
texts = datasets['train']['text']
train, validation, test = np.split(
    texts,
    [int(len(texts) * train_percentage),
     int(len(texts) * (train_percentage + validation_percentage))],
)

datasets = DatasetDict(
    {
        'train': Dataset.from_dict({'text': list(train)}),
        'validation': Dataset.from_dict({'text': list(validation)}),
        'test': Dataset.from_dict({'text': list(test)})
    }
)
```
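As a quick sanity check of the boundary arithmetic above (no download needed), using the 47 train examples listed in the splits table:

```python
# 47 examples, split 90% / 7% / 3% via integer cut points.
n = 47
train_end = int(n * 0.9)
val_end = int(n * (0.9 + 0.07))
sizes = (train_end, val_end - train_end, n - val_end)
print(sizes)  # (42, 3, 2)
```

Note that because the cut points are truncated to integers, the last split absorbs the rounding remainder.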
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
  author = {Aleksey Korshuk},
  year = {2021}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
CyberHarem/victorious_kantaicollection | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Victorious (Kantai Collection)
This is the dataset of Victorious (Kantai Collection), containing 116 images and their tags.
The core tags of this character are `blonde_hair, blue_eyes, breasts, long_hair, medium_breasts, hair_between_eyes, hairband`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, etc.); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 116 | 131.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/victorious_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 116 | 78.02 MiB | [Download](https://huggingface.co/datasets/CyberHarem/victorious_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 260 | 159.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/victorious_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 116 | 115.61 MiB | [Download](https://huggingface.co/datasets/CyberHarem/victorious_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 260 | 219.95 MiB | [Download](https://huggingface.co/datasets/CyberHarem/victorious_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/victorious_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
Results of tag clustering; some recurring outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 18 |  |  |  |  |  | 1girl, red_rose, solo, tiara, military_uniform, upper_body, green_jacket, white_background, simple_background, white_shirt, black_gloves, closed_mouth, long_sleeves, smile, cropped_jacket, looking_at_viewer, one-hour_drawing_challenge |
| 1 | 5 |  |  |  |  |  | 1girl, arrow_(projectile), black_gloves, black_pantyhose, cropped_jacket, flight_deck, green_jacket, long_sleeves, machinery, overskirt, quiver, rigging, solo, tiara, holding_bow_(weapon), red_rose, closed_mouth, dress_shirt, pelvic_curtain, white_shirt, bangs, military_uniform, ocean, tachi-e, transparent_background, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | red_rose | solo | tiara | military_uniform | upper_body | green_jacket | white_background | simple_background | white_shirt | black_gloves | closed_mouth | long_sleeves | smile | cropped_jacket | looking_at_viewer | one-hour_drawing_challenge | arrow_(projectile) | black_pantyhose | flight_deck | machinery | overskirt | quiver | rigging | holding_bow_(weapon) | dress_shirt | pelvic_curtain | bangs | ocean | tachi-e | transparent_background |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------|:-------|:--------|:-------------------|:-------------|:---------------|:-------------------|:--------------------|:--------------|:---------------|:---------------|:---------------|:--------|:-----------------|:--------------------|:-----------------------------|:---------------------|:------------------|:--------------|:------------|:------------|:---------|:----------|:-----------------------|:--------------|:-----------------|:--------|:--------|:----------|:-------------------------|
| 0 | 18 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | X | | X | X | | X | X | X | X | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|