| datasetId | card |
|---|---|
Navarro20/Ferb | ---
license: openrail
---
|
Ahaa1234/SMPDATASET | ---
license: mit
---
|
abrhaleitela/Sentiment-Analysis-for-Tigrinya | ---
license: apache-2.0
---
The first sentiment analysis dataset (with two classes) for the Tigrinya language. The training set was constructed automatically, while the test set was labeled manually.
1. Negative sentiment is labeled as 0.
2. Positive sentiment is labeled as 1.
train size: 49374 (25031 negative and 24343 positive)
test size: 4000 (2000 negative and 2000 positive)
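Given the 0/1 label convention above, a quick class-balance check after loading a split can be sketched as follows (a minimal illustration; the `class_balance` helper and the assumption that labels arrive as a list of integers are ours, not part of the dataset):

```python
from collections import Counter

# Label convention from the dataset description.
ID2LABEL = {0: "negative", 1: "positive"}

def class_balance(label_ids):
    """Count examples per sentiment class, given integer labels."""
    return dict(Counter(ID2LABEL[i] for i in label_ids))
```

On the train split this should report 25,031 negative and 24,343 positive examples.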
For more information on our experiments and results, please check our paper:
Title: Transferring Monolingual Model to Low-Resource Language: The Case of Tigrinya
Link: https://arxiv.org/pdf/2006.07698.pdf
Authors: Abrhalei Frezghi Tela, Abraham Woubie, Ville Hautamaki
Please consider citing the paper.
@misc{tela2020transferring,
  title={Transferring Monolingual Model to Low-Resource Language: The Case of Tigrinya},
  author={Abrhalei Tela and Abraham Woubie and Ville Hautamaki},
  year={2020},
  eprint={2006.07698},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
|
adamxyang/1.4b-policy_preference_data_gold_labelled_noisy_0.1 | ---
dataset_info:
  features:
  - name: answers
    sequence: string
  - name: input
    dtype: string
  - name: instruction
    dtype: string
  - name: preference
    dtype: int64
  splits:
  - name: train
    num_bytes: 27875579
    num_examples: 49383
  - name: validation
    num_bytes: 1139961
    num_examples: 2000
  download_size: 15731877
  dataset_size: 29015540
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
---
|
LuisLenin/DataClinicalv1 | ---
license: openrail
task_categories:
- token-classification
pretty_name: DataClinicalv1
size_categories:
- n<1K
--- |
CyberHarem/nightingale_arknights | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of nightingale/ナイチンゲール/夜莺 (Arknights)
This is the dataset of nightingale/ナイチンゲール/夜莺 (Arknights), containing 321 images and their tags.
The core tags of this character are `long_hair, horns, blonde_hair, blue_eyes, very_long_hair, hat, white_headwear, blunt_bangs`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 321 | 599.92 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nightingale_arknights/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 321 | 499.90 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nightingale_arknights/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 765 | 928.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nightingale_arknights/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/nightingale_arknights',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
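Building on the loop above, per-item tag metadata can be aggregated to see which tags dominate the dataset. This is an illustrative sketch (the `tag_frequencies` helper is ours, and it assumes each item's tags iterate as tag names):

```python
from collections import Counter

def tag_frequencies(tag_lists):
    """Aggregate tag counts across items, e.g. to spot recurring outfits."""
    counts = Counter()
    for tags in tag_lists:
        counts.update(tags)
    return counts

# e.g. tag_frequencies(item.meta['tags'] for item in source).most_common(10)
```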
## List of Clusters
List of tag clustering results; some outfits may be discoverable here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 15 |  |  |  |  |  | 1girl, bare_shoulders, long_sleeves, solo, white_dress, holding_staff, horns_through_headwear, looking_at_viewer, simple_background, parted_lips, sleeveless, sweater, upper_body, white_background, birdcage, turtleneck |
| 1 | 6 |  |  |  |  |  | 1girl, holding_staff, long_sleeves, looking_at_viewer, solo, white_dress, bare_shoulders, closed_mouth, cowboy_shot, horns_through_headwear, off_shoulder |
| 2 | 9 |  |  |  |  |  | 1girl, bare_shoulders, long_sleeves, solo, white_dress, holding_staff, white_footwear, looking_at_viewer, sitting, birdcage, full_body, knee_boots, black_ribbon |
| 3 | 5 |  |  |  |  |  | 1girl, horns_through_headwear, long_sleeves, sitting, solo, white_dress, bare_shoulders, birdcage, looking_at_viewer, parted_lips, demon_horns, off_shoulder, sweater |
| 4 | 6 |  |  |  |  |  | 1girl, black_dress, bonnet, long_sleeves, looking_at_viewer, official_alternate_costume, solo, black_headwear, frills, holding_staff, parted_lips, wide_sleeves |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bare_shoulders | long_sleeves | solo | white_dress | holding_staff | horns_through_headwear | looking_at_viewer | simple_background | parted_lips | sleeveless | sweater | upper_body | white_background | birdcage | turtleneck | closed_mouth | cowboy_shot | off_shoulder | white_footwear | sitting | full_body | knee_boots | black_ribbon | demon_horns | black_dress | bonnet | official_alternate_costume | black_headwear | frills | wide_sleeves |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------------|:---------------|:-------|:--------------|:----------------|:-------------------------|:--------------------|:--------------------|:--------------|:-------------|:----------|:-------------|:-------------------|:-----------|:-------------|:---------------|:--------------|:---------------|:-----------------|:----------|:------------|:-------------|:---------------|:--------------|:--------------|:---------|:-----------------------------|:-----------------|:---------|:---------------|
| 0 | 15 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | | | | | | | | | X | X | X | | | | | | | | | | | | |
| 2 | 9 |  |  |  |  |  | X | X | X | X | X | X | | X | | | | | | | X | | | | | X | X | X | X | X | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | X | X | X | X | | X | X | | X | | X | | | X | | | | X | | X | | | | X | | | | | | |
| 4 | 6 |  |  |  |  |  | X | | X | X | | X | | X | | X | | | | | | | | | | | | | | | | X | X | X | X | X | X |
|
Roderich/Elsa_prueba | ---
license: other
---
|
asgaardlab/GPT-4V-DescribeChangesCutscene | ---
dataset_info:
  features:
  - name: image_2
    dtype: image
  - name: image_1
    dtype: image
  - name: prompt
    dtype: string
  - name: gpt-4v-response
    sequence: string
  - name: image_2_width
    dtype: int64
  - name: image_2_height
    dtype: int64
  - name: image_1_width
    dtype: int64
  - name: image_1_height
    dtype: int64
  - name: header_text
    dtype: string
  - name: projection_id
    dtype: string
  splits:
  - name: validation
    num_bytes: 52762891.0
    num_examples: 218
  download_size: 39843787
  dataset_size: 52762891.0
---
# Dataset Card for "GPT-4V-DescribeChangesCutscene"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
arjunssat/rfp_train_v1 | ---
license: apache-2.0
---
|
oyxy2019/GPTTokenizer_THUCNews_10000_to_lm_datasets | ---
dataset_info:
  features:
  - name: input_ids
    sequence: int32
  - name: attention_mask
    sequence: int8
  - name: labels
    sequence: int64
  splits:
  - name: train
    num_bytes: 134080000
    num_examples: 80000
  - name: validation
    num_bytes: 13408000
    num_examples: 8000
  - name: test
    num_bytes: 1340800
    num_examples: 800
  download_size: 24032981
  dataset_size: 148828800
---
# Dataset Card for "GPTTokenizer_THUCNews_10000_to_lm_datasets"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/sessyoin_kiara_fgo | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of sessyoin_kiara/殺生院キアラ/杀生院祈荒 (Fate/Grand Order)
This is the dataset of sessyoin_kiara/殺生院キアラ/杀生院祈荒 (Fate/Grand Order), containing 500 images and their tags.
The core tags of this character are `yellow_eyes, breasts, long_hair, black_hair, large_breasts, facial_mark, parted_bangs, very_long_hair, multicolored_hair, pink_hair, streaked_hair, wavy_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 963.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sessyoin_kiara_fgo/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 500 | 815.41 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sessyoin_kiara_fgo/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1295 | 1.51 GiB | [Download](https://huggingface.co/datasets/CyberHarem/sessyoin_kiara_fgo/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/sessyoin_kiara_fgo',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be discoverable here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 7 |  |  |  |  |  | 1boy, 1girl, blush, forehead_mark, hetero, nude, penis, sweat, looking_at_viewer, open_mouth, spread_legs, mosaic_censoring, navel, nipples, sex, smile, thighhighs, thighs, vaginal, chest_tattoo, collarbone, cum_in_pussy, on_back, solo_focus |
| 1 | 5 |  |  |  |  |  | 1girl, blush, forehead_mark, hetero, horns, nipples, solo_focus, thighhighs, vaginal, bare_shoulders, mosaic_censoring, open_mouth, panties_aside, penis, smile, sweat, veil, 1boy, cowgirl_position, detached_sleeves, girl_on_top, navel, spread_legs, thighs, collarbone, cum_in_pussy, group_sex, handjob, lace_trim, licking_lips, long_sleeves, looking_at_viewer, multiple_boys, revealing_clothes, tattoo, wide_sleeves |
| 2 | 15 |  |  |  |  |  | 1girl, bare_shoulders, cleavage, detached_sleeves, forehead_mark, solo, veil, collarbone, smile, looking_at_viewer, pink_thighhighs, wide_sleeves, long_sleeves, thighs, navel, sideboob, revealing_clothes, tattoo, blush, pink_panties, sitting, horn_ornament, parted_lips |
| 3 | 9 |  |  |  |  |  | 1girl, bare_shoulders, chest_tattoo, cleavage, forehead_mark, looking_at_viewer, smile, solo, upper_body, veil, collarbone, detached_sleeves, parted_lips, horn_ornament, pink_ribbon, wide_sleeves, long_sleeves, simple_background |
| 4 | 5 |  |  |  |  |  | 1girl, bare_shoulders, cleavage, detached_sleeves, forehead_mark, looking_at_viewer, smile, solo, veil, wide_sleeves, horn_ornament, long_sleeves, revealing_clothes, tattoo, horn_ring, parted_lips, sideboob, collarbone, navel |
| 5 | 26 |  |  |  |  |  | 1girl, chest_tattoo, forehead_mark, hair_flower, pearl_necklace, solo, twin_braids, bare_shoulders, looking_at_viewer, shell_necklace, smile, cleavage, white_bikini, pearl_bracelet, thighs, blush, navel, sarong, collarbone, earrings |
| 6 | 26 |  |  |  |  |  | 1girl, habit, smile, solo, looking_at_viewer, nun, necklace, long_sleeves, black_dress, wide_sleeves, closed_mouth, simple_background, thighhighs, side_slit |
| 7 | 18 |  |  |  |  |  | bare_shoulders, forehead_mark, looking_at_viewer, 1girl, double_bun, blush, solo, thighs, smile, white_one-piece_swimsuit, small_breasts, chest_tattoo, closed_mouth, white_background, collarbone, dress_swimsuit |
| 8 | 30 |  |  |  |  |  | 1girl, bare_shoulders, black_bikini, black_gloves, black_headwear, cleavage, fingerless_gloves, forehead_mark, halterneck, pink_necktie, police_hat, solo, pantyhose, ponytail, looking_at_viewer, covered_navel, smile, thighs, peaked_cap, torn_clothes, blush, bodystocking, armband |
| 9 | 9 |  |  |  |  |  | 1girl, long_sleeves, turtleneck_sweater, looking_at_viewer, ribbed_sweater, smile, solo, black_jacket, forehead_mark, holding_book, necklace, off_shoulder, sidelocks, closed_mouth, full_body, standing, white_pants |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1boy | 1girl | blush | forehead_mark | hetero | nude | penis | sweat | looking_at_viewer | open_mouth | spread_legs | mosaic_censoring | navel | nipples | sex | smile | thighhighs | thighs | vaginal | chest_tattoo | collarbone | cum_in_pussy | on_back | solo_focus | horns | bare_shoulders | panties_aside | veil | cowgirl_position | detached_sleeves | girl_on_top | group_sex | handjob | lace_trim | licking_lips | long_sleeves | multiple_boys | revealing_clothes | tattoo | wide_sleeves | cleavage | solo | pink_thighhighs | sideboob | pink_panties | sitting | horn_ornament | parted_lips | upper_body | pink_ribbon | simple_background | horn_ring | hair_flower | pearl_necklace | twin_braids | shell_necklace | white_bikini | pearl_bracelet | sarong | earrings | habit | nun | necklace | black_dress | closed_mouth | side_slit | double_bun | white_one-piece_swimsuit | small_breasts | white_background | dress_swimsuit | black_bikini | black_gloves | black_headwear | fingerless_gloves | halterneck | pink_necktie | police_hat | pantyhose | ponytail | covered_navel | peaked_cap | torn_clothes | bodystocking | armband | turtleneck_sweater | ribbed_sweater | black_jacket | holding_book | off_shoulder | sidelocks | full_body | standing | white_pants |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------|:--------|:--------|:----------------|:---------|:-------|:--------|:--------|:--------------------|:-------------|:--------------|:-------------------|:--------|:----------|:------|:--------|:-------------|:---------|:----------|:---------------|:-------------|:---------------|:----------|:-------------|:--------|:-----------------|:----------------|:-------|:-------------------|:-------------------|:--------------|:------------|:----------|:------------|:---------------|:---------------|:----------------|:--------------------|:---------|:---------------|:-----------|:-------|:------------------|:-----------|:---------------|:----------|:----------------|:--------------|:-------------|:--------------|:--------------------|:------------|:--------------|:-----------------|:--------------|:-----------------|:---------------|:-----------------|:---------|:-----------|:--------|:------|:-----------|:--------------|:---------------|:------------|:-------------|:---------------------------|:----------------|:-------------------|:-----------------|:---------------|:---------------|:-----------------|:--------------------|:-------------|:---------------|:-------------|:------------|:-----------|:----------------|:-------------|:---------------|:---------------|:----------|:---------------------|:-----------------|:---------------|:---------------|:---------------|:------------|:------------|:-----------|:--------------|
| 0 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | X | | X | X | X | X | X | X | X | X | | X | X | X | X | | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 15 |  |  |  |  |  | | X | X | X | | | | | X | | | | X | | | X | | X | | | X | | | | | X | | X | | X | | | | | | X | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 9 |  |  |  |  |  | | X | | X | | | | | X | | | | | | | X | | | | X | X | | | | | X | | X | | X | | | | | | X | | | | X | X | X | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 5 |  |  |  |  |  | | X | | X | | | | | X | | | | X | | | X | | | | | X | | | | | X | | X | | X | | | | | | X | | X | X | X | X | X | | X | | | X | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 26 |  |  |  |  |  | | X | X | X | | | | | X | | | | X | | | X | | X | | X | X | | | | | X | | | | | | | | | | | | | | | X | X | | | | | | | | | | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 26 |  |  |  |  |  | | X | | | | | | | X | | | | | | | X | X | | | | | | | | | | | | | | | | | | | X | | | | X | | X | | | | | | | | | X | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 18 |  |  |  |  |  | | X | X | X | | | | | X | | | | | | | X | | X | | X | X | | | | | X | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | X | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 30 |  |  |  |  |  | | X | X | X | | | | | X | | | | | | | X | | X | | | | | | | | X | | | | | | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | |
| 9 | 9 |  |  |  |  |  | | X | | X | | | | | X | | | | | | | X | | | | | | | | | | | | | | | | | | | | X | | | | | | X | | | | | | | | | | | | | | | | | | | | | X | | X | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X |
|
projecte-aina/ancora-ca-ner | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ca
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: ancora-ca-ner
size_categories:
- unknown
source_datasets: []
task_categories: []
task_ids: []
---
# Dataset Card for AnCora-Ca-NER
## Dataset Description
- **Website:** https://zenodo.org/records/4761746
- **Homepage:** [Projecte AINA](https://projecteaina.cat/tech/)
- **Repository:** [HuggingFace](https://huggingface.co/projecte-aina)
- **Paper:** [Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan](https://arxiv.org/abs/2107.07903)
- **Paper:** [AnCora: Multilevel Annotated Corpora for Catalan and Spanish](http://www.lrec-conf.org/proceedings/lrec2008/pdf/35_paper.pdf)
- **Point of Contact:** langtech@bsc.es
### Dataset Summary
This is a dataset for Named Entity Recognition (NER) in Catalan. It adapts the <a href="http://clic.ub.edu/corpus/">AnCora corpus</a> for Machine Learning and Language Model evaluation purposes.
This dataset was developed by [BSC TeMU](https://temu.bsc.es/) as part of the [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/), to enrich the [Catalan Language Understanding Benchmark (CLUB)](https://club.aina.bsc.es/).
### Supported Tasks and Leaderboards
Named Entity Recognition, Language Modelling
### Languages
The dataset is in Catalan (`ca-ES`).
## Dataset Structure
### Data Instances
Three two-column files, one for each split.
<pre>
Fundació B-ORG
Privada I-ORG
Fira I-ORG
de I-ORG
Manresa I-ORG
ha O
fet O
un O
balanç O
de O
l' O
activitat O
del O
Palau B-LOC
Firal I-LOC
</pre>
### Data Fields
Every file has two columns, with the word form or punctuation symbol in the first one and the corresponding IOB tag in the second one.
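A minimal sketch of reading this two-column IOB format into entity spans (the helper names are illustrative, not part of the distribution):

```python
def parse_conll(text):
    """Parse two-column IOB lines into (token, tag) pairs, skipping blank lines."""
    pairs = []
    for line in text.strip().splitlines():
        if not line.strip():
            continue
        token, tag = line.split()
        pairs.append((token, tag))
    return pairs

def extract_entities(pairs):
    """Group B-/I- tagged tokens into (entity_text, label) spans."""
    entities, current, label = [], [], None
    for token, tag in pairs:
        if tag.startswith("B-"):
            if current:
                entities.append((" ".join(current), label))
            current, label = [token], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(token)
        else:  # "O" tag closes any open entity
            if current:
                entities.append((" ".join(current), label))
            current, label = [], None
    if current:
        entities.append((" ".join(current), label))
    return entities
```

Applied to the sample above, this yields `("Fundació Privada Fira de Manresa", "ORG")` and `("Palau Firal", "LOC")`.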
### Data Splits
We took the original train, dev and test splits from the [UD version of the corpus](https://huggingface.co/datasets/universal_dependencies):
- train: 10,630 examples
- validation: 1,429 examples
- test: 1,528 examples
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
#### Initial Data Collection and Normalization
[AnCora](http://clic.ub.edu/corpus/) consists of a Catalan corpus (AnCora-CA) and a Spanish corpus (AnCora-ES), each of 500,000 tokens (some multi-word). The corpora are annotated for linguistic phenomena at different levels.
The AnCora corpus is mainly based on newswire texts. For more information, refer to Taulé, M., M.A. Martí, M. Recasens (2009): <a href="http://www.lrec-conf.org/proceedings/lrec2008/pdf/35_paper.pdf">"AnCora: Multilevel Annotated Corpora for Catalan and Spanish"</a>, Proceedings of the 6th International Conference on Language Resources and Evaluation.
#### Who are the source language producers?
Catalan [AnCora corpus](http://clic.ub.edu/corpus/) is compiled from articles from the following news outlets: <a href="https://www.efe.com">EFE</a>, <a href="https://www.acn.cat">ACN</a>, <a href="https://www.elperiodico.cat/ca/">El Periodico</a>.
### Annotations
#### Annotation process
We adapted the NER labels from [AnCora corpus](http://clic.ub.edu/corpus/) to a token-per-line, multi-column format.
#### Who are the annotators?
Original annotators from [AnCora corpus](http://clic.ub.edu/corpus/).
### Personal and Sensitive Information
No personal or sensitive information is included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/en/inici/index.html) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/).
### Licensing Information
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by/4.0/">Attribution 4.0 International License</a>.
The [AnCora corpus](http://clic.ub.edu/corpus/) is used under a [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license.
### Citation Information
```
@inproceedings{armengol-estape-etal-2021-multilingual,
    title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
    author = "Armengol-Estap{\'e}, Jordi and
      Carrino, Casimiro Pio and
      Rodriguez-Penagos, Carlos and
      de Gibert Bonet, Ona and
      Armentano-Oller, Carme and
      Gonzalez-Agirre, Aitor and
      Melero, Maite and
      Villegas, Marta",
    booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.findings-acl.437",
    doi = "10.18653/v1/2021.findings-acl.437",
    pages = "4933--4946",
}
```
[](https://doi.org/10.5281/zenodo.4761746)
### Contributions
[N/A] |
result-kand2-sdxl-wuerst-karlo/53284ebf | ---
dataset_info:
  features:
  - name: result
    dtype: string
  - name: id
    dtype: int64
  splits:
  - name: train
    num_bytes: 191
    num_examples: 10
  download_size: 1401
  dataset_size: 191
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# Dataset Card for "53284ebf"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/astraea_fgo | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of astraea/アストライア/阿斯特赖亚 (Fate/Grand Order)
This is the dataset of astraea/アストライア/阿斯特赖亚 (Fate/Grand Order), containing 46 images and their tags.
The core tags of this character are `long_hair, blonde_hair, drill_hair, yellow_eyes, multicolored_hair, breasts, blue_hair, gradient_hair, hair_ornament, large_breasts, very_long_hair, big_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 46 | 83.82 MiB | [Download](https://huggingface.co/datasets/CyberHarem/astraea_fgo/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 46 | 70.86 MiB | [Download](https://huggingface.co/datasets/CyberHarem/astraea_fgo/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 110 | 124.66 MiB | [Download](https://huggingface.co/datasets/CyberHarem/astraea_fgo/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/astraea_fgo',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be discoverable here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------|
| 0 | 17 |  |  |  |  |  | 1girl, solo, looking_at_viewer, smile, cleavage, flower, bare_shoulders, blue_dress, open_mouth |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | looking_at_viewer | smile | cleavage | flower | bare_shoulders | blue_dress | open_mouth |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------------------|:--------|:-----------|:---------|:-----------------|:-------------|:-------------|
| 0 | 17 |  |  |  |  |  | X | X | X | X | X | X | X | X | X |
|
MicPie/unpredictable_cluster18 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: UnpredicTable-cluster18
size_categories:
- 100K<n<1M
source_datasets: []
task_categories:
- multiple-choice
- question-answering
- zero-shot-classification
- text2text-generation
- table-question-answering
- text-generation
- text-classification
- tabular-classification
task_ids:
- multiple-choice-qa
- extractive-qa
- open-domain-qa
- closed-domain-qa
- closed-book-qa
- open-book-qa
- language-modeling
- multi-class-classification
- natural-language-inference
- topic-classification
- multi-label-classification
- tabular-multi-class-classification
- tabular-multi-label-classification
---
# Dataset Card for "UnpredicTable-cluster18" - Dataset of Few-shot Tasks from Tables
## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://ethanperez.net/unpredictable
- **Repository:** https://github.com/JunShern/few-shot-adaptation
- **Paper:** Few-shot Adaptation Works with UnpredicTable Data
- **Point of Contact:** junshern@nyu.edu, perez@nyu.edu
### Dataset Summary
The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.
There are several dataset versions available:
* [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites.
* [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset.
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):
* [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low)
* [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium)
* [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high)
* UnpredicTable data subsets based on the website of origin:
* [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com)
* [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net)
* [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com)
* [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com)
* [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com)
* [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com)
* [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org)
* [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org)
* [UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com)
* [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com)
* [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com)
* [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com)
* [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com)
* [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com)
* [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com)
* [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com)
* [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com)
* [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org)
* [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org)
* [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org)
* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):
* [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00)
* [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01)
* [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02)
* [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03)
* [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04)
* [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05)
* [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06)
* [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07)
* [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08)
* [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09)
* [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10)
* [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11)
* [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12)
* [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13)
* [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14)
* [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15)
* [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16)
* [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17)
* [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18)
* [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19)
* [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20)
* [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21)
* [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22)
* [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23)
* [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24)
* [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25)
* [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26)
* [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27)
* [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28)
* [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29)
* [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise)
### Supported Tasks and Leaderboards
Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have thousands of tasks, while each task has only a few examples, in contrast to most current NLP datasets, which are very deep, i.e., tens of tasks with many examples each. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.
The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.
### Languages
English
## Dataset Structure
### Data Instances
Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.
There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.
### Data Fields
- `task`: task identifier
- `input`: column elements of a specific row in the table
- `options`: for multiple-choice classification, the options to choose from
- `output`: target column element of the same row as the input
- `pageTitle`: the title of the page containing the table
- `outputColName`: output column name
- `url`: URL of the website containing the table
- `wdcFile`: WDC Web Table Corpus file
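The examples of a task can be concatenated into a single few-shot prompt using the fields above. A minimal sketch of that formatting (the field names follow this card; the prompt template itself is an illustrative assumption, not the paper's exact format):

```python
def format_few_shot(examples):
    """Concatenate task examples into a single few-shot prompt string.

    Each example is a dict with 'input', 'options', and 'output' fields,
    as described in the data fields above. The template used here is an
    illustrative assumption, not the paper's exact prompt format.
    """
    parts = []
    for ex in examples:
        options = ""
        if ex.get("options"):
            options = "Options: " + ", ".join(ex["options"]) + "\n"
        parts.append(f"Input: {ex['input']}\n{options}Output: {ex['output']}")
    return "\n\n".join(parts)

# Hypothetical examples in the card's schema, for illustration only.
examples = [
    {"input": "Team: Reds | Year: 1990", "options": [], "output": "Won"},
    {"input": "Team: Cubs | Year: 1990", "options": ["Won", "Lost"], "output": "Lost"},
]
prompt = format_few_shot(examples)
```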
### Data Splits
The UnpredicTable datasets do not come with additional data splits.
## Dataset Creation
### Curation Rationale
Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.
### Source Data
#### Initial Data Collection and Normalization
We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.
#### Who are the source language producers?
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).
### Annotations
#### Annotation process
Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low),
[UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication.
#### Who are the annotators?
Annotations were carried out by a lab assistant.
### Personal and Sensitive Information
The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.
### Discussion of Biases
Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.
### Other Known Limitations
No additional known limitations.
## Additional Information
### Dataset Curators
Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez
### Licensing Information
Apache 2.0
### Citation Information
```
@misc{chan2022few,
author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
title = {Few-shot Adaptation Works with UnpredicTable Data},
publisher={arXiv},
year = {2022},
url = {https://arxiv.org/abs/2208.01009}
}
```
|
hughlan1214/SER | ---
license: unknown
dataset_info:
features:
- name: path
dtype: string
- name: audio
dtype: audio
- name: emotion
dtype: string
- name: dataset
dtype: string
splits:
- name: train
num_bytes: 4484334276.16
num_examples: 11970
download_size: 1315896899
dataset_size: 4484334276.16
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
This dataset was created from the [Speech Emotion Recognition (en)](https://www.kaggle.com/datasets/dmitrybabko/speech-emotion-recognition-en) dataset.
It combines four of the most popular English speech-emotion datasets (Crema, Ravdess, Savee, and Tess), containing a total of over 12,000 .wav audio files. Each of these four datasets uses 6 to 8 different emotion labels.
The combined dataset covers the 7 types of emotions contained in speech:
```python
emotions = ['angry', 'disgust', 'fear', 'happy', 'neutral', 'sad', 'surprise']
```
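For training a classifier, these string labels typically need to be mapped to integer ids. A minimal sketch, where the id order simply follows the alphabetical list above (an assumption, not something fixed by the dataset):

```python
emotions = ['angry', 'disgust', 'fear', 'happy', 'neutral', 'sad', 'surprise']

# Build label <-> id mappings from the label list above.
label2id = {label: i for i, label in enumerate(emotions)}
id2label = {i: label for label, i in label2id.items()}

def encode(labels):
    """Map a list of emotion strings to integer ids."""
    return [label2id[label] for label in labels]
```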
|
eugenesiow/Urban100 | ---
annotations_creators:
- machine-generated
language_creators:
- found
language: []
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- other
task_ids: []
pretty_name: Urban100
tags:
- other-image-super-resolution
---
# Dataset Card for Urban100
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage**: https://github.com/jbhuang0604/SelfExSR
- **Repository**: https://huggingface.co/datasets/eugenesiow/Urban100
- **Paper**: https://openaccess.thecvf.com/content_cvpr_2015/html/Huang_Single_Image_Super-Resolution_2015_CVPR_paper.html
- **Leaderboard**: https://github.com/eugenesiow/super-image#scale-x2
### Dataset Summary
The Urban100 dataset contains 100 images of urban scenes. It is commonly used as a test set to evaluate the performance of super-resolution models. It was first published by [Huang et al. (2015)](https://openaccess.thecvf.com/content_cvpr_2015/html/Huang_Single_Image_Super-Resolution_2015_CVPR_paper.html) in the paper "Single Image Super-Resolution From Transformed Self-Exemplars".
Install with `pip`:
```bash
pip install datasets super-image
```
Evaluate a model with the [`super-image`](https://github.com/eugenesiow/super-image) library:
```python
from datasets import load_dataset
from super_image import EdsrModel
from super_image.data import EvalDataset, EvalMetrics
dataset = load_dataset('eugenesiow/Urban100', 'bicubic_x2', split='validation')
eval_dataset = EvalDataset(dataset)
model = EdsrModel.from_pretrained('eugenesiow/edsr-base', scale=2)
EvalMetrics().evaluate(model, eval_dataset)
```
### Supported Tasks and Leaderboards
The dataset is commonly used for evaluation of the `image-super-resolution` task.
Unofficial [`super-image`](https://github.com/eugenesiow/super-image) leaderboard for:
- [Scale 2](https://github.com/eugenesiow/super-image#scale-x2)
- [Scale 3](https://github.com/eugenesiow/super-image#scale-x3)
- [Scale 4](https://github.com/eugenesiow/super-image#scale-x4)
- [Scale 8](https://github.com/eugenesiow/super-image#scale-x8)
### Languages
Not applicable.
## Dataset Structure
### Data Instances
An example of `validation` for `bicubic_x2` looks as follows.
```
{
"hr": "/.cache/huggingface/datasets/downloads/extracted/Urban100_HR/img_001.png",
"lr": "/.cache/huggingface/datasets/downloads/extracted/Urban100_LR_x2/img_001.png"
}
```
### Data Fields
The data fields are the same among all splits.
- `hr`: a `string` to the path of the High Resolution (HR) `.png` image.
- `lr`: a `string` to the path of the Low Resolution (LR) `.png` image.
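Super-resolution results on these HR/LR pairs are usually reported as PSNR. A minimal pure-Python sketch of the metric (real evaluations, such as the `super-image` snippet above, typically also crop borders and convert to the Y channel, which this sketch omits):

```python
import math

def psnr(hr_pixels, sr_pixels, max_val=255.0):
    """Peak signal-to-noise ratio between two flat pixel sequences."""
    mse = sum((h - s) ** 2 for h, s in zip(hr_pixels, sr_pixels)) / len(hr_pixels)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

# Toy example: two 2x2 "images" flattened to lists.
value = psnr([0, 0, 255, 255], [10, 0, 245, 255])
```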
### Data Splits
| name |validation|
|-------|---:|
|bicubic_x2|100|
|bicubic_x3|100|
|bicubic_x4|100|
## Dataset Creation
### Curation Rationale
The authors have created Urban100 containing 100 HR images with a variety of real-world structures.
### Source Data
#### Initial Data Collection and Normalization
The authors constructed this dataset using images from Flickr (under CC license) using keywords such as urban, city, architecture, and structure.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
No annotations.
#### Who are the annotators?
No annotators.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- **Original Authors**: [Huang et al. (2015)](https://github.com/jbhuang0604/SelfExSR)
### Licensing Information
The dataset provided uses images from Flickr under the CC-BY-4.0 license.
### Citation Information
```bibtex
@InProceedings{Huang_2015_CVPR,
author = {Huang, Jia-Bin and Singh, Abhishek and Ahuja, Narendra},
title = {Single Image Super-Resolution From Transformed Self-Exemplars},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2015}
}
```
### Contributions
Thanks to [@eugenesiow](https://github.com/eugenesiow) for adding this dataset.
|
syzym/xbmu_amdo31 | ---
pretty_name: XBMU-AMDO31
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- tib
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
task_categories:
- automatic-speech-recognition
---
# Dataset Card for [XBMU-AMDO31]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**https://github.com/sendream/xbmu_amdo31
- **Repository:**https://github.com/sendream/xbmu_amdo31
- **Paper:**
- **Leaderboard:**https://github.com/sendream/xbmu_amdo31#leaderboard
- **Point of Contact:**[xxlgy@xbmu.edu.cn](mailto:xxlgy@xbmu.edu.cn)
### Dataset Summary
The XBMU-AMDO31 dataset is a speech recognition corpus of the Amdo Tibetan dialect. This open-source corpus contains 31 hours of speech data and resources for building speech recognition systems, including transcribed texts and a Tibetan pronunciation dictionary.
### Supported Tasks and Leaderboards
automatic-speech-recognition: The dataset can be used to train a model for Amdo Tibetan Automatic Speech Recognition (ASR). It was recorded by 66 native speakers of Amdo Tibetan, and the recorded audio was processed and manually inspected. The most common evaluation metric is the word error rate (WER). The task has an active leaderboard which can be found at https://github.com/sendream/xbmu_amdo31#leaderboard and ranks models based on their WER.
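The WER metric mentioned above is word-level edit distance divided by the reference length. A minimal sketch of the standard algorithm (an illustration, not code from this repository):

```python
def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)
```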
### Languages
XBMU-AMDO31 contains audio, a Tibetan pronunciation dictionary and transcription data in Amdo Tibetan.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
The dataset has three splits: train, evaluation (dev), and test. Each speaker recorded approximately 450 sentences, with a small number of individuals recording fewer than 200 sentences.
| Subset | Hours | Male | Female | Remarks |
| ------ | ----- | ---- | ------ | --------------------------------------- |
| Train | 25.41 | 27 | 27 | 18539 sentences recorded by 54 speakers |
| Dev | 2.81 | 2 | 4 | 2050 sentences recorded by 6 speakers |
| Test | 2.85 | 3 | 3 | 2041 sentences recorded by 6 speakers |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
This dataset is distributed under CC BY-SA 4.0.
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@speechless-z](https://github.com/speechless-z) for adding this dataset. |
YuehHanChen/TRAIN_mistral_7b_bs_prediction | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 3299961
num_examples: 1478
download_size: 0
dataset_size: 3299961
---
# Dataset Card for "TRAIN_mistral_7b_bs_prediction"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
alvations/c4p0-x1-en-ja | ---
dataset_info:
features:
- name: source
dtype: string
- name: target
dtype: string
- name: target_backto_source
dtype: string
- name: raw_target
list:
- name: generated_text
dtype: string
- name: raw_target_backto_source
list:
- name: generated_text
dtype: string
- name: prompt
dtype: string
- name: reverse_prompt
dtype: string
- name: source_langid
dtype: string
- name: target_langid
dtype: string
- name: target_backto_source_langid
dtype: string
- name: doc_id
dtype: int64
- name: sent_id
dtype: int64
- name: timestamp
dtype: string
- name: url
dtype: string
- name: doc_hash
dtype: string
splits:
- name: train
num_bytes: 49764
num_examples: 42
download_size: 37636
dataset_size: 49764
configs:
- config_name: default
data_files:
- split: train
path: 66034f82c5c65ae4/train-*
---
|
popaqy/en-bg | ---
dataset_info:
features:
- name: bg
dtype: string
- name: en
dtype: string
splits:
- name: train
num_bytes: 175001915
num_examples: 408290
download_size: 82909795
dataset_size: 175001915
---
# Dataset Card for "en-bg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
smaciu/bee-wings-small | ---
task_categories:
- feature-extraction
pretty_name: >-
Collection of wing images for conservation of honey bees (Apis mellifera)
biodiversity in Europe
size_categories:
- 1K<n<10K
license: afl-3.0
---
Collection of wing images for conservation of honey bees (Apis mellifera) biodiversity in Europe
https://zenodo.org/record/7244070
A small version (10%) of the original bee-wings-large dataset.
Nexdata/60321_Changsha_Dialect_Pronunciation_Dictionary | ---
license: cc-by-nc-nd-4.0
---
## Description
The Changsha dialect pronunciation dictionary contains more than 60,000 entries, each consisting of a word and its International Phonetic Alphabet (IPA) transcription. It was produced by Changsha dialect linguistics experts with accurate phonetic transcription, and is applicable to the research and development of Changsha dialect speech recognition technology.
For more details, please refer to the link: https://www.nexdata.ai/dataset/1013?source=Huggingface
# Specifications
## Format
txt
## Content
60,321 Changsha dialect words and corresponding IPA phonetic symbols
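A pronunciation lexicon in this word-plus-IPA form can be loaded into a lookup table. A minimal sketch assuming whitespace-separated entries (the actual delimiter of the txt file is not specified on this card, and the sample entries below are hypothetical):

```python
def parse_lexicon(lines):
    """Parse 'word phone phone ...' lines into a dict of word -> phone list.

    Assumes each line holds a word followed by its whitespace-separated
    phonetic symbols; the real file's delimiter may differ.
    """
    lexicon = {}
    for line in lines:
        parts = line.strip().split()
        if len(parts) >= 2:
            lexicon[parts[0]] = parts[1:]
    return lexicon

# Hypothetical entries, for illustration only.
sample = ["你好 n i x au", "长沙 ts an s a"]
lexicon = parse_lexicon(sample)
```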
## Language
Changsha Dialect
## Application scenarios
speech recognition
# Licensing Information
Commercial License
|
alejandrowallace/tmdb-5000 | ---
license: unknown
task_categories:
- zero-shot-classification
language:
- en
size_categories:
- 1K<n<10K
--- |
stanford-crfm/i2s-webpage | ---
dataset_info:
- config_name: css
features:
- name: structure
dtype: string
- name: text
dtype: string
- name: image
dtype: image
- name: download_url
dtype: string
- name: instance_name
dtype: string
- name: date
dtype: string
- name: additional_info
dtype: string
- name: date_scrapped
dtype: string
- name: file_filters
dtype: string
- name: compilation_info
dtype: string
- name: rendering_filters
dtype: string
- name: assets
sequence: string
- name: category
dtype: string
- name: uuid
dtype: string
- name: length
dtype: string
- name: difficulty
dtype: string
splits:
- name: validation
num_bytes: 815105541.0
num_examples: 300
download_size: 809865478
dataset_size: 815105541.0
- config_name: html
features:
- name: structure
dtype: string
- name: text
dtype: string
- name: image
dtype: image
- name: download_url
dtype: string
- name: instance_name
dtype: string
- name: date
dtype: string
- name: additional_info
dtype: string
- name: date_scrapped
dtype: string
- name: file_filters
dtype: string
- name: compilation_info
dtype: string
- name: rendering_filters
dtype: string
- name: assets
sequence: string
- name: category
dtype: string
- name: uuid
dtype: string
- name: length
dtype: string
- name: difficulty
dtype: string
splits:
- name: validation
num_bytes: 263470560.0
num_examples: 300
download_size: 257833986
dataset_size: 263470560.0
- config_name: javascript
features:
- name: structure
dtype: string
- name: text
dtype: string
- name: image
dtype: image
- name: download_url
dtype: string
- name: instance_name
dtype: string
- name: date
dtype: string
- name: additional_info
dtype: string
- name: date_scrapped
dtype: string
- name: file_filters
dtype: string
- name: compilation_info
dtype: string
- name: rendering_filters
dtype: string
- name: assets
sequence: string
- name: category
dtype: string
- name: uuid
dtype: string
- name: length
dtype: string
- name: difficulty
dtype: string
splits:
- name: validation
num_bytes: 279510653.0
num_examples: 300
download_size: 273214540
dataset_size: 279510653.0
configs:
- config_name: css
data_files:
- split: validation
path: css/validation-*
- config_name: html
data_files:
- split: validation
path: html/validation-*
- config_name: javascript
data_files:
- split: validation
path: javascript/validation-*
---
|
juege/wlop | ---
license: openrail
---
|
zolak/twitter_dataset_79_1713114857 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 416650
num_examples: 1023
download_size: 216180
dataset_size: 416650
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
gzguevara/test3 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: image
dtype: image
- name: mask_0
dtype: image
- name: mask_1
dtype: image
- name: mask_2
dtype: image
- name: mask_3
dtype: image
- name: mask_4
dtype: image
splits:
- name: train
num_bytes: 3817521.0
num_examples: 7
- name: test
num_bytes: 1781388.0
num_examples: 4
download_size: 5733535
dataset_size: 5598909.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Parth/topic_modelling_dataset | ---
license: apache-2.0
---
|
Lam-ia/Wikipedia-Euskera | ---
license: apache-2.0
task_categories:
- text-generation
language:
- eu
pretty_name: tiny-basque-demo
size_categories:
- n<1K
--- |
breadlicker45/yahoo_answers_v2 | ---
license: mit
train-eval-index:
- config: default
  task: token-classification
  task_id: entity_extraction
  splits:
    eval_split: test
  col_mapping:
    tokens: tokens
    labels: tags
--- |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/7fc5e3e9 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 184
num_examples: 10
download_size: 1338
dataset_size: 184
---
# Dataset Card for "7fc5e3e9"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Wiktor123321/jazz_songs_embeddings | ---
dataset_info:
features:
- name: name
dtype: string
- name: embedding
dtype:
array2_d:
shape:
- 4
dtype: int16
splits:
- name: train
num_bytes: 286779829
num_examples: 1359
download_size: 132474270
dataset_size: 286779829
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hackathon-pln-es/Axolotl-Spanish-Nahuatl | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- es
license:
- mpl-2.0
multilinguality:
- translation
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text2text-generation
- translation
task_ids: []
pretty_name: "Axolotl Spanish-Nahuatl parallel corpus , is a digital corpus that compiles\
\ several sources with parallel content in these two languages. \n\nA parallel corpus\
\ is a type of corpus that contains texts in a source language with their correspondent\
\ translation in one or more target languages. Gutierrez-Vasques, X., Sierra, G.,\
\ and Pompa, I. H. (2016). Axolotl: a web accessible parallel corpus for spanish-nahuatl.\
\ In Proceedings of the Ninth International Conference on Language Resources and\
\ Evaluation (LREC 2016), Portoro, Slovenia. European Language Resources Association\
\ (ELRA). Grupo de Ingenieria Linguistica (GIL, UNAM). Corpus paralelo español-nahuatl.\
\ http://www.corpus.unam.mx/axolotl."
language_bcp47:
- es-MX
tags:
- conditional-text-generation
---
# Axolotl-Spanish-Nahuatl : Parallel corpus for Spanish-Nahuatl machine translation
## Table of Contents
- [Dataset Card for [Axolotl-Spanish-Nahuatl]](#dataset-card-for-Axolotl-Spanish-Nahuatl)
## Dataset Description
- **Source 1:** http://www.corpus.unam.mx/axolotl
- **Source 2:** http://link.springer.com/article/10.1007/s10579-014-9287-y
- **Repository:1** https://github.com/ElotlMX/py-elotl
- **Repository:2** https://github.com/christos-c/bible-corpus/blob/master/bibles/Nahuatl-NT.xml
- **Paper:** https://aclanthology.org/N15-2021.pdf
## Dataset Collection
In order to build a good translator, we collected and cleaned two of the most complete Nahuatl-Spanish parallel corpora available: Axolotl, compiled by an expert team at UNAM, and the Bible UEDIN Nahuatl-Spanish corpus, crawled from the Bible Gateway site by Christos Christodoulopoulos and Mark Steedman.
After removing misaligned rows and entries with duplicated Spanish text in both the original and Nahuatl columns, we were left with 12,207 samples from Axolotl and 7,821 samples from Bible UEDIN, for a total of 20,028 utterances.
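A minimal sketch of this cleaning step (the tuple layout and helper name are illustrative, not the actual pipeline code):

```python
def clean_parallel(pairs):
    """Drop empty/misaligned rows and duplicated Spanish sides.

    `pairs` is a list of (spanish, nahuatl) sentence tuples.
    """
    seen = set()
    cleaned = []
    for es, nah in pairs:
        es, nah = es.strip(), nah.strip()
        if not es or not nah:   # misaligned or empty row
            continue
        if es in seen:          # duplicated Spanish sentence
            continue
        seen.add(es)
        cleaned.append((es, nah))
    return cleaned

pairs = [
    ("Hola", "Niltze"),
    ("Hola", "Niltze"),        # duplicate Spanish side
    ("", "Tlazohcamati"),      # misaligned row
    ("Gracias", "Tlazohcamati"),
]
print(clean_parallel(pairs))  # [('Hola', 'Niltze'), ('Gracias', 'Tlazohcamati')]
```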
## Team members
- Emilio Morales [(milmor)](https://huggingface.co/milmor)
- Rodrigo Martínez Arzate [(rockdrigoma)](https://huggingface.co/rockdrigoma)
- Luis Armando Mercado [(luisarmando)](https://huggingface.co/luisarmando)
- Jacobo del Valle [(jjdv)](https://huggingface.co/jjdv)
## Applications
- MODEL: Spanish Nahuatl Translation Task with a T5 model in ([t5-small-spanish-nahuatl](https://huggingface.co/hackathon-pln-es/t5-small-spanish-nahuatl))
- DEMO: Spanish Nahuatl Translation in ([Spanish-nahuatl](https://huggingface.co/spaces/hackathon-pln-es/Spanish-Nahuatl-Translation)) |
AdapterOcean/Open_Platypus_standardized_cluster_7_alpaca | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 1953287
num_examples: 672
download_size: 899279
dataset_size: 1953287
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Open_Platypus_standardized_cluster_7_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Yotam/economics-textbook | ---
license: cc-by-4.0
---
|
goodfellowliu/BSDS100 | ---
license: openrail
---
|
nlpservicebots/operations | ---
license: apache-2.0
task_categories:
- text-classification
language:
- es
size_categories:
- 1K<n<10K
--- |
linhqyy/test_data_normalized | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: id
dtype: string
splits:
- name: train
num_bytes: 2100079911.832
num_examples: 1299
download_size: 1881493177
dataset_size: 2100079911.832
---
# Dataset Card for "test_data_normalized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
the-french-artist/wikipedia_20220301.simple_sentence_split_text_has_at_least_5_words | ---
license: apache-2.0
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: sentence_index
dtype: int64
- name: line_index
dtype: int64
splits:
- name: train
num_bytes: 364913841.5102045
num_examples: 2258728
download_size: 137528826
dataset_size: 364913841.5102045
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Vermouth/TweetQA_squad_format | ---
task_categories:
- question-answering
--- |
iloraishaque/llm-bronte | ---
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 1032696.0
num_examples: 126
- name: test
num_bytes: 122940.0
num_examples: 15
download_size: 558167
dataset_size: 1155636.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
male-2/0.2.0-public | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: chosen_intent
dtype: string
- name: rejected_intent
dtype: string
- name: data_from
dtype: string
- name: is_synthesized
dtype: bool
splits:
- name: train
num_bytes: 674
num_examples: 1
download_size: 7668
dataset_size: 674
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
gguichard/wsd_myriade_synth_data_id_label_pc2 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: tokens
sequence: string
- name: wn_sens
sequence: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: float64
splits:
- name: train
num_bytes: 51147806.48548672
num_examples: 91188
- name: test
num_bytes: 5683650.514513279
num_examples: 10133
download_size: 14318575
dataset_size: 56831457.0
---
# Dataset Card for "wsd_myriade_synth_data_id_label_pc2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
syntaxnoob/weather-prediction-prototype-aws | ---
license: unlicense
size_categories:
- 100K<n<1M
---
# Weather prediction prototype database.
This database was made using data provided by KMI.
This database will only be used to train a prototype.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **KMI**
## Dataset Structure
Normalized columns:
- timestamp
- air_pressure
- relative_humidity
- precipitation
- wind_speed
- wind_direction
More information about these columns can be found in the `information_10min.txt` file.
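As a hedged illustration (the file layout and delimiter are assumptions; only the column names come from the list above), the normalized columns can be pulled out of a raw 10-minute CSV export with the standard library alone:

```python
import csv
import io

NORMALIZED = ["timestamp", "air_pressure", "relative_humidity",
              "precipitation", "wind_speed", "wind_direction"]

def select_normalized(raw_csv_text):
    """Keep only the normalized columns from a raw 10-minute export."""
    reader = csv.DictReader(io.StringIO(raw_csv_text))
    return [{k: row[k] for k in NORMALIZED} for row in reader]

sample = (
    "timestamp,air_pressure,relative_humidity,precipitation,"
    "wind_speed,wind_direction,station_id\n"
    "2023-01-01T00:00,1013.2,87,0.0,3.4,220,6447\n"
)
rows = select_normalized(sample)
print(rows[0]["wind_speed"])  # 3.4
```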
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
|
liupf/SLM4CRP_with_RTs | ---
license: mit
task_categories:
- text-classification
- text-generation
- feature-extraction
tags:
- chemistry
- biology
---
# SLM4CRP_with_RTs Dataset
[](https://arxiv.org/abs/2404.09606)
## Overview
The SLM4CRP_with_RTs dataset is a chemical reaction prediction (CRP) dataset featuring reaction type (RT) labels, developed from the Mol-Instructions dataset. We introduce a novel knowledge elicitation approach that integrates a **self-feedback mechanism** with data curation using large language models (LLMs). This dataset embodies domain-specific knowledge by combining the reactants and products of chemical reactions with annotated RTs, demonstrating that domain-integrated data can enhance the capabilities of LLMs.
## How to Use
The dataset is primarily intended for researchers and developers working on chemical reaction prediction models. To facilitate access and use, the dataset can be found via the GitHub repository link: [SLM4CRP](https://github.com/AI-HPC-Research-Team/SLM4CRP). Users can download the dataset and use it as per their requirements for model training and evaluation in the field of computational chemistry and machine learning.
## Data Visualization
To balance accuracy and the number of RTs during knowledge elicitation by LLMs, we evaluated four encoding methods:
- Direct output vector
- Output minus input vector
- Concatenated input-output vector
- Dot product of input and output vectors
These vectors are encoded using the LLM Text+Chem T5 model. We also discuss the rationale for exploring different numbers of clusters, emphasizing that the choice of encoding and clustering can significantly influence annotation accuracy \(Acc\).
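Assuming the input and output embeddings are plain Python lists of equal length (in the paper they are produced by Text+Chem T5), the four encoding schemes above can be sketched as:

```python
def encode(inp, out, method):
    """Combine input/output embedding vectors under the four schemes."""
    if method == "direct":      # direct output vector
        return out
    if method == "difference":  # output minus input vector
        return [o - i for i, o in zip(inp, out)]
    if method == "concat":      # concatenated input-output vector
        return inp + out
    if method == "dot":         # dot product of input and output vectors
        return sum(i * o for i, o in zip(inp, out))
    raise ValueError(f"unknown method: {method}")

inp, out = [1.0, 2.0], [3.0, 4.0]
print(encode(inp, out, "concat"))  # [1.0, 2.0, 3.0, 4.0]
print(encode(inp, out, "dot"))     # 11.0
```

Note that the concatenated vector, which performed best in the annotation experiments, is also the only scheme that doubles the dimensionality.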
**Encoding methods**:
Our comparative analysis of \(Acc\) across various encoding vectors and task types focuses on a range of \(N\). As shown in Figure 1(a), \(Acc\) gradually declines within the adaptable range of \(N\) from 3 to 12. Our results highlight that the concatenated input-output vector consistently achieves the highest annotation accuracy among different tasks, maintaining over 70% accuracy even when \(N\) is set to 10. This underscores the concatenated vector's ability to capture the nuances of chemical RTs.
<img src="data_visualization.png" alt="Data Visualization" width="90%">
**Figure 1: Performance of encoding vector self-feedback annotation and clustering**
**Clustering visualization and implications**:
We explore the clustering results for the test dataset, focusing on the optimal encoding method (concatenated input-output vector) for selected cluster numbers, \(N=6\) and \(N=10\). Visualizations in Figure 1(b) show the distribution of clusters, which were reduced from high-dimensional encoded vectors to two dimensions for better visualization. These visualizations reveal that chemical reactions for each task display distinct knowledge patterns, providing essential prompt information for generating reaction content.
## Citation
```
@misc{liu2024selffeedback,
title={A Self-feedback Knowledge Elicitation Approach for Chemical Reaction Predictions},
author={Pengfei Liu and Jun Tao and Zhixiang Ren},
year={2024},
eprint={2404.09606},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
## Acknowledgments
The development of the SLM4CRP_with_RTs dataset was greatly inspired by the Mol-Instruction approach to CRPs. We are also thankful to Hugging Face for providing the initial model weights that facilitated our research. |
ZhangShenao/0.0_dataup_noreplacerej_40g_dataset | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: score_chosen
dtype: float64
- name: score_rejected
dtype: float64
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: is_better
dtype: bool
splits:
- name: test_prefs_1
num_bytes: 13596717
num_examples: 2000
- name: train_prefs_1
num_bytes: 140204482
num_examples: 20378
- name: test_prefs_2
num_bytes: 13779073
num_examples: 2000
- name: train_prefs_2
num_bytes: 142688869
num_examples: 20378
download_size: 171615921
dataset_size: 310269141
configs:
- config_name: default
data_files:
- split: test_prefs_1
path: data/test_prefs_1-*
- split: train_prefs_1
path: data/train_prefs_1-*
- split: test_prefs_2
path: data/test_prefs_2-*
- split: train_prefs_2
path: data/train_prefs_2-*
---
# Dataset Card for "0.0_dataup_noreplacerej_40g_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jan-hq/nvidia_helpsteer_binarized | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 106899146
num_examples: 35331
- name: validation
num_bytes: 5537881
num_examples: 1789
download_size: 23814863
dataset_size: 112437027
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
# Dataset Card for "nvidia_helpsteer_binarized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
thanhduycao/data_synthesis | ---
dataset_info:
features:
- name: audio
struct:
- name: array
sequence: float64
- name: path
dtype: 'null'
- name: sampling_rate
dtype: int64
- name: transcription
dtype: string
- name: old_transcription
dtype: string
splits:
- name: train
num_bytes: 2364881573
num_examples: 4430
download_size: 559968141
dataset_size: 2364881573
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "data_synthesis"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
manishiitg/en-hi-raw | ---
dataset_info:
features:
- name: org_dataset
dtype: string
- name: en
dtype: string
- name: hi
dtype: string
splits:
- name: train
num_bytes: 14916818007
num_examples: 14364371
download_size: 5629685665
dataset_size: 14916818007
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
openclimatefix/era5-land | ---
license: mit
---
This dataset comprises ECMWF ERA5-Land data covering 2014 to October 2022. The data are on a 0.1-degree grid and include fewer variables than the standard ERA5 reanalysis, but at a higher spatial resolution. All the data were downloaded as NetCDF files from the Copernicus Data Store, converted to Zarr using Xarray, and uploaded here. Each file covers one day and holds 24 timesteps.
kirillgoltsman/databricks-dolly-llama2-1k | ---
license: apache-2.0
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 478770
num_examples: 1000
download_size: 299975
dataset_size: 478770
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Brokzo/brokzodata | ---
license: llama2
---
|
arturslogins/lv-data | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
- name: start_time
dtype: string
- name: end_time
dtype: string
splits:
- name: train
num_bytes: 737602.0
num_examples: 19
- name: validation
num_bytes: 737602.0
num_examples: 19
download_size: 1472504
dataset_size: 1475204.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-acb860-1886064279 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test_cot
eval_info:
task: text_zero_shot_classification
model: facebook/opt-1.3b
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test_cot
dataset_config: mathemakitten--winobias_antistereotype_test_cot
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-1.3b
* Dataset: mathemakitten/winobias_antistereotype_test_cot
* Config: mathemakitten--winobias_antistereotype_test_cot
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
jonathang/dreambooth-hackathon-images-protein2 | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 3901067.0
num_examples: 16
download_size: 3846228
dataset_size: 3901067.0
---
# Dataset Card for "dreambooth-hackathon-images-protein2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rodrigooo745/vegeta | ---
license: openrail
---
|
galsenai/french-wolof-translation | ---
dataset_info:
features:
- name: translation
struct:
- name: fr
dtype: string
- name: wo
dtype: string
splits:
- name: train
num_bytes: 2006697
num_examples: 7263
- name: dev
num_bytes: 493299
num_examples: 1816
- name: test
num_bytes: 632727
num_examples: 2270
download_size: 2224550
dataset_size: 3132723
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev
path: data/dev-*
- split: test
path: data/test-*
---
<p align="center">
<img src="https://huggingface.co/datasets/galsenai/french-wolof-translation/resolve/main/distribution.png" />
</p>
|
gldalessandro/FiguresSD | ---
license: unknown
---
|
HuggingFaceGECLM/REDDIT_threaded | ---
dataset_info:
features:
- name: start_date
dtype: string
- name: end_date
dtype: string
- name: thread_id
dtype: string
- name: subreddit
dtype: string
- name: subreddit_id
dtype: string
- name: total_score
dtype: int64
- name: text
dtype: string
- name: num_messages
dtype: int64
- name: avg_score
dtype: float64
splits:
- name: tifu
num_bytes: 454388196
num_examples: 208970
- name: Documentaries
num_bytes: 93629662
num_examples: 54210
- name: explainlikeimfive
num_bytes: 1046681120
num_examples: 652525
- name: WritingPrompts
num_bytes: 845178305
num_examples: 211245
- name: changemyview
num_bytes: 951562359
num_examples: 134667
- name: LifeProTips
num_bytes: 270244448
num_examples: 237375
- name: todayilearned
num_bytes: 922018228
num_examples: 667200
- name: science
num_bytes: 293085684
num_examples: 153849
- name: askscience
num_bytes: 485441735
num_examples: 212470
- name: ifyoulikeblank
num_bytes: 79838466
num_examples: 84536
- name: Foodforthought
num_bytes: 47246819
num_examples: 18475
- name: IWantToLearn
num_bytes: 74180798
num_examples: 42927
- name: bestof
num_bytes: 107926710
num_examples: 44408
- name: IAmA
num_bytes: 387869949
num_examples: 177616
- name: socialskills
num_bytes: 208582458
num_examples: 103156
- name: programming
num_bytes: 350054262
num_examples: 168154
- name: gardening
num_bytes: 349336406
num_examples: 365384
- name: Damnthatsinteresting
num_bytes: 153439357
num_examples: 166914
- name: lifehacks
num_bytes: 33642034
num_examples: 36265
- name: travel
num_bytes: 354429861
num_examples: 204434
- name: relationship_advice
num_bytes: 3779011482
num_examples: 1553725
- name: philosophy
num_bytes: 206723923
num_examples: 44796
- name: Fitness
num_bytes: 666442392
num_examples: 371800
- name: YouShouldKnow
num_bytes: 57320879
num_examples: 30239
- name: history
num_bytes: 130094622
num_examples: 53636
- name: books
num_bytes: 324225643
num_examples: 179976
- name: technology
num_bytes: 418616319
num_examples: 225749
- name: UpliftingNews
num_bytes: 68698295
num_examples: 39274
- name: GetMotivated
num_bytes: 75648561
num_examples: 52652
- name: gadgets
num_bytes: 55725657
num_examples: 33467
- name: space
num_bytes: 186780078
num_examples: 117530
- name: sports
num_bytes: 92005983
num_examples: 79903
- name: Showerthoughts
num_bytes: 842958842
num_examples: 1337755
- name: personalfinance
num_bytes: 1855551236
num_examples: 847951
- name: buildapc
num_bytes: 4208530709
num_examples: 1892762
- name: EatCheapAndHealthy
num_bytes: 65934639
num_examples: 38706
- name: boardgames
num_bytes: 398914878
num_examples: 191826
- name: malefashionadvice
num_bytes: 335564558
num_examples: 231363
- name: femalefashionadvice
num_bytes: 114962358
num_examples: 49612
- name: scifi
num_bytes: 86198725
num_examples: 50268
- name: Fantasy
num_bytes: 288057115
num_examples: 116359
- name: Games
num_bytes: 540015295
num_examples: 211345
- name: bodyweightfitness
num_bytes: 146251112
num_examples: 74165
- name: SkincareAddiction
num_bytes: 518704351
num_examples: 288116
- name: podcasts
num_bytes: 62920098
num_examples: 46116
- name: suggestmeabook
num_bytes: 244723416
num_examples: 174635
- name: AskHistorians
num_bytes: 638056364
num_examples: 151306
- name: mildlyinteresting
num_bytes: 419997082
num_examples: 782806
- name: DIY
num_bytes: 218417925
num_examples: 149760
- name: gaming
num_bytes: 1760077944
num_examples: 1838783
download_size: 15593798957
dataset_size: 26315907338
---
# Dataset Card for "REDDIT_threaded"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
taka-yayoi/databricks-dolly-15k-ja | ---
license: cc-by-sa-3.0
---
This dataset builds on the dataset below; the columns have been renamed and the data converted to JSONL so that it can be used with Dolly's training scripts.
https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja
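A hedged sketch of the conversion (the exact column mapping is an assumption based on the format Dolly's training script expects, not the actual conversion script):

```python
import json

# Map source column names to the ones Dolly's training script expects
# (hypothetical mapping for illustration).
RENAME = {"instruction": "instruction", "input": "context", "output": "response"}

def to_jsonl(records):
    """Rename the columns of each record and serialize it as one JSON line."""
    lines = []
    for rec in records:
        renamed = {RENAME.get(k, k): v for k, v in rec.items()}
        lines.append(json.dumps(renamed, ensure_ascii=False))
    return "\n".join(lines)

records = [{"instruction": "富士山の高さは?", "input": "", "output": "3776メートルです。"}]
print(to_jsonl(records))
```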
Dolly
https://github.com/databrickslabs/dolly |
tmuzaffarmydost/data-parsing-new-dataset-v3-updated-labels | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 293773894.0
num_examples: 146
download_size: 31036732
dataset_size: 293773894.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "data-parsing-new-dataset-v3-updated-labels"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
m-a-p/CodeEditorBench | ---
license: apache-2.0
---
# CodeEditorBench
[**🌐 Homepage**](https://codeeditorbench.github.io/) | [**🤗 Dataset**](https://huggingface.co/datasets/m-a-p/CodeEditorBench) | [**📖 arXiv**](https://arxiv.org/pdf/2404.03543.pdf) | [**GitHub**](https://github.com/CodeEditorBench/CodeEditorBench)
## Introduction
Large Language Models (LLMs) for code are rapidly evolving, with code editing emerging as a critical capability. We introduce CodeEditorBench, an evaluation framework designed to rigorously assess the performance of LLMs in code editing tasks, including debugging, translating, polishing, and requirement switching. Unlike existing benchmarks focusing solely on code generation, CodeEditorBench emphasizes real-world scenarios and practical aspects of software development. We curate diverse coding challenges and scenarios from five sources, covering various programming languages, complexity levels, and editing tasks. Evaluation of 19 LLMs reveals that closed-source models (particularly Gemini-Ultra and GPT-4) outperform open-source models in CodeEditorBench, highlighting differences in model performance based on problem types and prompt sensitivities.
CodeEditorBench aims to catalyze advancements in LLMs by providing a robust platform for assessing code editing capabilities. We will release all prompts and datasets to enable the community to expand the dataset and benchmark emerging LLMs. By introducing CodeEditorBench, we contribute to the advancement of LLMs in code editing and provide a valuable resource for researchers and practitioners.

## Results
<div style="display: flex; justify-content: space-around; align-items: center;">
<img src="Models_Zero_Shot.png" alt="First Image Description" style="width: 48%;" />
<img src="win_rate_zero.png" alt="Second Image Description" style="width: 48%;" />
</div>
We propose evaluating LLMs across four scenarios that capture distinct code editing capabilities: code debug, code translate, code polish, and code requirement switch. The figure on the left is a radial plot of model performance across the four scenarios in CodeEditorBench\_Plus, highlighting how relative differences across models change from scenario to scenario. The figure on the right shows the performance of open-source and closed-source models on CodeEditorBench\_Plus in the zero-shot setting, evaluated through win\_rate.
🎯 All model results are generated by greedy decoding.
✨ Code Debug, Code Translate and Code Requirement Switch are evaluated with pass@1, while Code Polish is evaluated with Mean OptScore.
## Disclaimers
The guidelines for the annotators emphasized strict compliance with copyright and licensing rules from the initial data source, specifically avoiding materials from websites that forbid copying and redistribution.
Should you encounter any data samples potentially breaching the copyright or licensing regulations of any site, we encourage you to [contact](#contact) us. Upon verification, such samples will be promptly removed.
## Contact
<!-- - Jiawei Guo: moriatysss152@gmail.com
- Ziming Li :
- Xueling Liu:
- Kaijing Ma: -->
- Ge Zhang: zhangge@01.ai
- Wenhu Chen: wenhuchen@uwaterloo.ca
- Jie Fu: jiefu@ust.hk
## Citation
**BibTeX:**
```bibtex
@misc{guo2024codeeditorbench,
title={CodeEditorBench: Evaluating Code Editing Capability of Large Language Models},
author={Jiawei Guo and Ziming Li and Xueling Liu and Kaijing Ma and Tianyu Zheng and Zhouliang Yu and Ding Pan and Yizhi LI and Ruibo Liu and Yue Wang and Shuyue Guo and Xingwei Qu and Xiang Yue and Ge Zhang and Wenhu Chen and Jie Fu},
year={2024},
eprint={2404.03543},
archivePrefix={arXiv},
primaryClass={cs.SE}
}
``` |
khaled123/creativeRES | ---
language:
- en
license: apache-2.0
size_categories:
- 1K<n<10K
task_categories:
- table-question-answering
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 68246544
num_examples: 15390
download_size: 30188817
dataset_size: 68246544
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Baseta/BasetaTube-VM | ---
license: mit
---
|
liuyanchen1015/MULTI_VALUE_mrpc_perfect_already | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: test
num_bytes: 43070
num_examples: 144
- name: train
num_bytes: 105147
num_examples: 367
- name: validation
num_bytes: 12484
num_examples: 44
download_size: 113863
dataset_size: 160701
---
# Dataset Card for "MULTI_VALUE_mrpc_perfect_already"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MohamedSaeed-dev/python-text-to-code | ---
license: llama2
---
|
autoevaluate/autoeval-eval-futin__feed-sen_en_-1de085-2240171540 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- futin/feed
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-3b
metrics: []
dataset_name: futin/feed
dataset_config: sen_en_
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-3b
* Dataset: futin/feed
* Config: sen_en_
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. |
D3STRON/music_lyrics_500k | ---
dataset_info:
features:
- name: Lyric
dtype: string
- name: genre
dtype: string
- name: keywords
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 649484743
num_examples: 500000
- name: test
num_bytes: 65124213
num_examples: 50000
download_size: 387245688
dataset_size: 714608956
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
mHossain/final_train_v4_test_280000 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: input_text
dtype: string
- name: target_text
dtype: string
- name: prefix
dtype: string
splits:
- name: train
num_bytes: 6704579.7
num_examples: 18000
- name: test
num_bytes: 744953.3
num_examples: 2000
download_size: 3210023
dataset_size: 7449533.0
---
# Dataset Card for "final_train_v4_test_280000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
huggingartists/miyagi | ---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/miyagi"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.536065 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/10795217955d95e2543993f8e83fe5c8.960x960x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/miyagi">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">MiyaGi</div>
<a href="https://genius.com/artists/miyagi">
<div style="text-align: center; font-size: 14px;">@miyagi</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/miyagi).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/miyagi")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|147| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/miyagi")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author = {Aleksey Korshuk},
    year   = {2021}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
sardinelab/wmt23-for-Tower | ---
dataset_info:
features:
- name: lp
dtype: string
- name: src
dtype: string
- name: ref
dtype: string
- name: prompt
dtype: string
- name: tower_7b_mt
dtype: string
- name: tower_7b_comet
dtype: float64
splits:
- name: train
num_bytes: 1665470
num_examples: 1106
download_size: 966977
dataset_size: 1665470
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/details_MBZUAI__LaMini-GPT-774M | ---
pretty_name: Evaluation run of MBZUAI/LaMini-GPT-774M
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [MBZUAI/LaMini-GPT-774M](https://huggingface.co/MBZUAI/LaMini-GPT-774M) on the\
\ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_MBZUAI__LaMini-GPT-774M\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-17T01:05:23.378180](https://huggingface.co/datasets/open-llm-leaderboard/details_MBZUAI__LaMini-GPT-774M/blob/main/results_2023-10-17T01-05-23.378180.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.03544463087248322,\n\
\ \"em_stderr\": 0.0018935573437954087,\n \"f1\": 0.12509857382550346,\n\
\ \"f1_stderr\": 0.0025549304231766066,\n \"acc\": 0.2829518547750592,\n\
\ \"acc_stderr\": 0.006964941277847027\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.03544463087248322,\n \"em_stderr\": 0.0018935573437954087,\n\
\ \"f1\": 0.12509857382550346,\n \"f1_stderr\": 0.0025549304231766066\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\"\
: 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5659037095501184,\n\
\ \"acc_stderr\": 0.013929882555694054\n }\n}\n```"
repo_url: https://huggingface.co/MBZUAI/LaMini-GPT-774M
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_17T01_05_23.378180
path:
- '**/details_harness|drop|3_2023-10-17T01-05-23.378180.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-17T01-05-23.378180.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_17T01_05_23.378180
path:
- '**/details_harness|gsm8k|5_2023-10-17T01-05-23.378180.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-17T01-05-23.378180.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_17T01_05_23.378180
path:
- '**/details_harness|winogrande|5_2023-10-17T01-05-23.378180.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-17T01-05-23.378180.parquet'
- config_name: results
data_files:
- split: 2023_10_17T01_05_23.378180
path:
- results_2023-10-17T01-05-23.378180.parquet
- split: latest
path:
- results_2023-10-17T01-05-23.378180.parquet
---
# Dataset Card for Evaluation run of MBZUAI/LaMini-GPT-774M
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/MBZUAI/LaMini-GPT-774M
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [MBZUAI/LaMini-GPT-774M](https://huggingface.co/MBZUAI/LaMini-GPT-774M) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_MBZUAI__LaMini-GPT-774M",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-17T01:05:23.378180](https://huggingface.co/datasets/open-llm-leaderboard/details_MBZUAI__LaMini-GPT-774M/blob/main/results_2023-10-17T01-05-23.378180.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"em": 0.03544463087248322,
"em_stderr": 0.0018935573437954087,
"f1": 0.12509857382550346,
"f1_stderr": 0.0025549304231766066,
"acc": 0.2829518547750592,
"acc_stderr": 0.006964941277847027
},
"harness|drop|3": {
"em": 0.03544463087248322,
"em_stderr": 0.0018935573437954087,
"f1": 0.12509857382550346,
"f1_stderr": 0.0025549304231766066
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.5659037095501184,
"acc_stderr": 0.013929882555694054
}
}
```
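The aggregate `acc` under `"all"` appears to be the unweighted mean of the per-task accuracies — an observation from the numbers above, not a documented guarantee. A short check reproduces it:

```python
# Reproduce the aggregate "acc" from the per-task accuracies in the JSON above,
# assuming (as the numbers suggest) an unweighted mean over tasks.
task_acc = {
    "harness|gsm8k|5": 0.0,
    "harness|winogrande|5": 0.5659037095501184,
}

agg_acc = sum(task_acc.values()) / len(task_acc)
print(agg_acc)  # 0.2829518547750592, matching the "all" entry
```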
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
open-llm-leaderboard/details_jondurbin__airoboros-l2-7b-gpt4-2.0 | ---
pretty_name: Evaluation run of jondurbin/airoboros-l2-7b-gpt4-2.0
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [jondurbin/airoboros-l2-7b-gpt4-2.0](https://huggingface.co/jondurbin/airoboros-l2-7b-gpt4-2.0)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_jondurbin__airoboros-l2-7b-gpt4-2.0\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-22T19:06:34.610591](https://huggingface.co/datasets/open-llm-leaderboard/details_jondurbin__airoboros-l2-7b-gpt4-2.0/blob/main/results_2023-10-22T19-06-34.610591.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.2790478187919463,\n\
\ \"em_stderr\": 0.004593378120842089,\n \"f1\": 0.3568791946308748,\n\
\ \"f1_stderr\": 0.004541856353836489,\n \"acc\": 0.3714854775657577,\n\
\ \"acc_stderr\": 0.008787294914639698\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.2790478187919463,\n \"em_stderr\": 0.004593378120842089,\n\
\ \"f1\": 0.3568791946308748,\n \"f1_stderr\": 0.004541856353836489\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.03184230477634572,\n \
\ \"acc_stderr\": 0.004836348558260957\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7111286503551697,\n \"acc_stderr\": 0.01273824127101844\n\
\ }\n}\n```"
repo_url: https://huggingface.co/jondurbin/airoboros-l2-7b-gpt4-2.0
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_22T19_06_34.610591
path:
- '**/details_harness|drop|3_2023-10-22T19-06-34.610591.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-22T19-06-34.610591.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_22T19_06_34.610591
path:
- '**/details_harness|gsm8k|5_2023-10-22T19-06-34.610591.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-22T19-06-34.610591.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_22T19_06_34.610591
path:
- '**/details_harness|winogrande|5_2023-10-22T19-06-34.610591.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-22T19-06-34.610591.parquet'
- config_name: results
data_files:
- split: 2023_10_22T19_06_34.610591
path:
- results_2023-10-22T19-06-34.610591.parquet
- split: latest
path:
- results_2023-10-22T19-06-34.610591.parquet
---
# Dataset Card for Evaluation run of jondurbin/airoboros-l2-7b-gpt4-2.0
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/jondurbin/airoboros-l2-7b-gpt4-2.0
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [jondurbin/airoboros-l2-7b-gpt4-2.0](https://huggingface.co/jondurbin/airoboros-l2-7b-gpt4-2.0) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_jondurbin__airoboros-l2-7b-gpt4-2.0",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-22T19:06:34.610591](https://huggingface.co/datasets/open-llm-leaderboard/details_jondurbin__airoboros-l2-7b-gpt4-2.0/blob/main/results_2023-10-22T19-06-34.610591.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"em": 0.2790478187919463,
"em_stderr": 0.004593378120842089,
"f1": 0.3568791946308748,
"f1_stderr": 0.004541856353836489,
"acc": 0.3714854775657577,
"acc_stderr": 0.008787294914639698
},
"harness|drop|3": {
"em": 0.2790478187919463,
"em_stderr": 0.004593378120842089,
"f1": 0.3568791946308748,
"f1_stderr": 0.004541856353836489
},
"harness|gsm8k|5": {
"acc": 0.03184230477634572,
"acc_stderr": 0.004836348558260957
},
"harness|winogrande|5": {
"acc": 0.7111286503551697,
"acc_stderr": 0.01273824127101844
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
AdapterOcean/med_alpaca_standardized_cluster_83_alpaca | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 17522870
num_examples: 10178
download_size: 8845207
dataset_size: 17522870
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "med_alpaca_standardized_cluster_83_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mlabonne/Evol-Instruct-Python-26k | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 39448413.53337422
num_examples: 26588
download_size: 22381182
dataset_size: 39448413.53337422
---
# Evol-Instruct-Python-26k
Filtered version of the [`nickrosh/Evol-Instruct-Code-80k-v1`](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) dataset that only keeps Python code (26,588 samples). You can find a smaller version of it here [`mlabonne/Evol-Instruct-Python-1k`](https://huggingface.co/datasets/mlabonne/Evol-Instruct-Python-1k).
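The card doesn't include the filtering script itself; a minimal sketch of one plausible heuristic — keeping only rows whose `output` opens a fenced Python code block — might look like this (the sample `rows` are illustrative, not taken from the dataset):

```python
FENCE = "`" * 3  # build the code-fence marker so it doesn't break this example's own fence

def is_python_row(row: dict) -> bool:
    """Heuristic: keep rows whose output contains a fenced Python code block."""
    return (FENCE + "python") in row["output"]

rows = [
    {"instruction": "Reverse a list.",
     "output": FENCE + "python\nxs[::-1]\n" + FENCE},
    {"instruction": "Reverse a vector.",
     "output": FENCE + "cpp\nstd::reverse(v.begin(), v.end());\n" + FENCE},
]

python_rows = [r for r in rows if is_python_row(r)]
print(len(python_rows))  # 1
```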
Here is the distribution of the number of tokens in each row (instruction + output) using Llama's tokenizer:
 |
XxFelipaoXx354/Leoncio | ---
license: openrail
---
|
SEACrowd/tydiqa_id | ---
tags:
- question-answering
language:
- ind
---
# tydiqa_id
The TyDiQA dataset is collected from Wikipedia articles, with human-annotated question-answer pairs covering 11 languages.
The question-answer pairs are collected for each language without using translation services.
IndoNLG uses the Indonesian data from the secondary Gold passage task of the original TyDiQA dataset,
randomly splitting off 15% of the training data to use as the test set.
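The card doesn't state the exact splitting procedure or seed IndoNLG used; a minimal sketch of such a 15% hold-out (the shuffling strategy and seed here are assumptions) could be:

```python
import random

def split_off_test(examples, test_fraction=0.15, seed=0):
    """Shuffle indices and hold out `test_fraction` of the examples as a test set."""
    idx = list(range(len(examples)))
    random.Random(seed).shuffle(idx)
    n_test = int(len(examples) * test_fraction)
    test = [examples[i] for i in idx[:n_test]]
    train = [examples[i] for i in idx[n_test:]]
    return train, test

train, test = split_off_test(list(range(100)))
print(len(train), len(test))  # 85 15
```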
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@article{clark-etal-2020-tydi,
title = "{T}y{D}i {QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages",
author = "Clark, Jonathan H. and
Choi, Eunsol and
Collins, Michael and
Garrette, Dan and
Kwiatkowski, Tom and
Nikolaev, Vitaly and
Palomaki, Jennimaria",
journal = "Transactions of the Association for Computational Linguistics",
volume = "8",
year = "2020",
address = "Cambridge, MA",
publisher = "MIT Press",
url = "https://aclanthology.org/2020.tacl-1.30",
doi = "10.1162/tacl_a_00317",
pages = "454--470",
}
@inproceedings{cahyawijaya-etal-2021-indonlg,
title = "{I}ndo{NLG}: Benchmark and Resources for Evaluating {I}ndonesian Natural Language Generation",
author = "Cahyawijaya, Samuel and
Winata, Genta Indra and
Wilie, Bryan and
Vincentio, Karissa and
Li, Xiaohong and
Kuncoro, Adhiguna and
Ruder, Sebastian and
Lim, Zhi Yuan and
Bahar, Syafri and
Khodra, Masayu and
Purwarianti, Ayu and
Fung, Pascale",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.699",
doi = "10.18653/v1/2021.emnlp-main.699",
pages = "8875--8898"
}
```
## License
Creative Commons Attribution Share-Alike 4.0 International
## Homepage
[https://github.com/IndoNLP/indonlg](https://github.com/IndoNLP/indonlg)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) |
open-llm-leaderboard/details_chinoll__Yi-6b-200k-dpo | ---
pretty_name: Evaluation run of chinoll/Yi-6b-200k-dpo
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [chinoll/Yi-6b-200k-dpo](https://huggingface.co/chinoll/Yi-6b-200k-dpo) on the\
\ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_chinoll__Yi-6b-200k-dpo\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-12-04T16:10:17.402126](https://huggingface.co/datasets/open-llm-leaderboard/details_chinoll__Yi-6b-200k-dpo/blob/main/results_2023-12-04T16-10-17.402126.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6274780891690785,\n\
\ \"acc_stderr\": 0.03214198982171106,\n \"acc_norm\": 0.6382309545732996,\n\
\ \"acc_norm_stderr\": 0.03286487964348697,\n \"mc1\": 0.3047735618115055,\n\
\ \"mc1_stderr\": 0.016114124156882455,\n \"mc2\": 0.4551491788416383,\n\
\ \"mc2_stderr\": 0.014826375266749701\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.39505119453924914,\n \"acc_stderr\": 0.014285898292938172,\n\
\ \"acc_norm\": 0.4308873720136519,\n \"acc_norm_stderr\": 0.014471133392642475\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5570603465445131,\n\
\ \"acc_stderr\": 0.004957182635381807,\n \"acc_norm\": 0.7452698665604461,\n\
\ \"acc_norm_stderr\": 0.004348189459336535\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6148148148148148,\n\
\ \"acc_stderr\": 0.04203921040156279,\n \"acc_norm\": 0.6148148148148148,\n\
\ \"acc_norm_stderr\": 0.04203921040156279\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7105263157894737,\n \"acc_stderr\": 0.03690677986137282,\n\
\ \"acc_norm\": 0.7105263157894737,\n \"acc_norm_stderr\": 0.03690677986137282\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.74,\n\
\ \"acc_stderr\": 0.04408440022768078,\n \"acc_norm\": 0.74,\n \
\ \"acc_norm_stderr\": 0.04408440022768078\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6943396226415094,\n \"acc_stderr\": 0.028353298073322666,\n\
\ \"acc_norm\": 0.6943396226415094,\n \"acc_norm_stderr\": 0.028353298073322666\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6527777777777778,\n\
\ \"acc_stderr\": 0.039812405437178615,\n \"acc_norm\": 0.6527777777777778,\n\
\ \"acc_norm_stderr\": 0.039812405437178615\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.36,\n \"acc_stderr\": 0.048241815132442176,\n \
\ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.048241815132442176\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.54,\n \"acc_stderr\": 0.05009082659620333,\n \"acc_norm\"\
: 0.54,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \
\ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6647398843930635,\n\
\ \"acc_stderr\": 0.03599586301247077,\n \"acc_norm\": 0.6647398843930635,\n\
\ \"acc_norm_stderr\": 0.03599586301247077\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.30392156862745096,\n \"acc_stderr\": 0.04576665403207762,\n\
\ \"acc_norm\": 0.30392156862745096,\n \"acc_norm_stderr\": 0.04576665403207762\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.77,\n \"acc_stderr\": 0.042295258468165065,\n \"acc_norm\": 0.77,\n\
\ \"acc_norm_stderr\": 0.042295258468165065\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.6042553191489362,\n \"acc_stderr\": 0.03196758697835362,\n\
\ \"acc_norm\": 0.6042553191489362,\n \"acc_norm_stderr\": 0.03196758697835362\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.43859649122807015,\n\
\ \"acc_stderr\": 0.04668000738510455,\n \"acc_norm\": 0.43859649122807015,\n\
\ \"acc_norm_stderr\": 0.04668000738510455\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.6137931034482759,\n \"acc_stderr\": 0.04057324734419035,\n\
\ \"acc_norm\": 0.6137931034482759,\n \"acc_norm_stderr\": 0.04057324734419035\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.47354497354497355,\n \"acc_stderr\": 0.02571523981134676,\n \"\
acc_norm\": 0.47354497354497355,\n \"acc_norm_stderr\": 0.02571523981134676\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3888888888888889,\n\
\ \"acc_stderr\": 0.04360314860077459,\n \"acc_norm\": 0.3888888888888889,\n\
\ \"acc_norm_stderr\": 0.04360314860077459\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.4,\n \"acc_stderr\": 0.049236596391733084,\n \
\ \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.049236596391733084\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7774193548387097,\n\
\ \"acc_stderr\": 0.023664216671642518,\n \"acc_norm\": 0.7774193548387097,\n\
\ \"acc_norm_stderr\": 0.023664216671642518\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.49261083743842365,\n \"acc_stderr\": 0.03517603540361009,\n\
\ \"acc_norm\": 0.49261083743842365,\n \"acc_norm_stderr\": 0.03517603540361009\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.63,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\"\
: 0.63,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7696969696969697,\n \"acc_stderr\": 0.0328766675860349,\n\
\ \"acc_norm\": 0.7696969696969697,\n \"acc_norm_stderr\": 0.0328766675860349\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8080808080808081,\n \"acc_stderr\": 0.028057791672989017,\n \"\
acc_norm\": 0.8080808080808081,\n \"acc_norm_stderr\": 0.028057791672989017\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8601036269430051,\n \"acc_stderr\": 0.02503387058301518,\n\
\ \"acc_norm\": 0.8601036269430051,\n \"acc_norm_stderr\": 0.02503387058301518\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6487179487179487,\n \"acc_stderr\": 0.024203665177902803,\n\
\ \"acc_norm\": 0.6487179487179487,\n \"acc_norm_stderr\": 0.024203665177902803\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.337037037037037,\n \"acc_stderr\": 0.028820884666253252,\n \
\ \"acc_norm\": 0.337037037037037,\n \"acc_norm_stderr\": 0.028820884666253252\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.7605042016806722,\n \"acc_stderr\": 0.027722065493361276,\n\
\ \"acc_norm\": 0.7605042016806722,\n \"acc_norm_stderr\": 0.027722065493361276\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3973509933774834,\n \"acc_stderr\": 0.0399552400768168,\n \"acc_norm\"\
: 0.3973509933774834,\n \"acc_norm_stderr\": 0.0399552400768168\n },\n\
\ \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8385321100917431,\n\
\ \"acc_stderr\": 0.01577623925616323,\n \"acc_norm\": 0.8385321100917431,\n\
\ \"acc_norm_stderr\": 0.01577623925616323\n },\n \"harness|hendrycksTest-high_school_statistics|5\"\
: {\n \"acc\": 0.5370370370370371,\n \"acc_stderr\": 0.03400603625538272,\n\
\ \"acc_norm\": 0.5370370370370371,\n \"acc_norm_stderr\": 0.03400603625538272\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7794117647058824,\n \"acc_stderr\": 0.02910225438967409,\n \"\
acc_norm\": 0.7794117647058824,\n \"acc_norm_stderr\": 0.02910225438967409\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7805907172995781,\n \"acc_stderr\": 0.026939106581553945,\n \
\ \"acc_norm\": 0.7805907172995781,\n \"acc_norm_stderr\": 0.026939106581553945\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6591928251121076,\n\
\ \"acc_stderr\": 0.0318114974705536,\n \"acc_norm\": 0.6591928251121076,\n\
\ \"acc_norm_stderr\": 0.0318114974705536\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7480916030534351,\n \"acc_stderr\": 0.03807387116306086,\n\
\ \"acc_norm\": 0.7480916030534351,\n \"acc_norm_stderr\": 0.03807387116306086\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.768595041322314,\n \"acc_stderr\": 0.03849856098794086,\n \"acc_norm\"\
: 0.768595041322314,\n \"acc_norm_stderr\": 0.03849856098794086\n },\n\
\ \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8055555555555556,\n\
\ \"acc_stderr\": 0.038260763248848646,\n \"acc_norm\": 0.8055555555555556,\n\
\ \"acc_norm_stderr\": 0.038260763248848646\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7791411042944786,\n \"acc_stderr\": 0.03259177392742179,\n\
\ \"acc_norm\": 0.7791411042944786,\n \"acc_norm_stderr\": 0.03259177392742179\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.39285714285714285,\n\
\ \"acc_stderr\": 0.04635550135609976,\n \"acc_norm\": 0.39285714285714285,\n\
\ \"acc_norm_stderr\": 0.04635550135609976\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7961165048543689,\n \"acc_stderr\": 0.03989139859531771,\n\
\ \"acc_norm\": 0.7961165048543689,\n \"acc_norm_stderr\": 0.03989139859531771\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8803418803418803,\n\
\ \"acc_stderr\": 0.02126271940040697,\n \"acc_norm\": 0.8803418803418803,\n\
\ \"acc_norm_stderr\": 0.02126271940040697\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768079,\n \
\ \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768079\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8109833971902938,\n\
\ \"acc_stderr\": 0.014000791294407004,\n \"acc_norm\": 0.8109833971902938,\n\
\ \"acc_norm_stderr\": 0.014000791294407004\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7225433526011561,\n \"acc_stderr\": 0.024105712607754307,\n\
\ \"acc_norm\": 0.7225433526011561,\n \"acc_norm_stderr\": 0.024105712607754307\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3854748603351955,\n\
\ \"acc_stderr\": 0.016277927039638193,\n \"acc_norm\": 0.3854748603351955,\n\
\ \"acc_norm_stderr\": 0.016277927039638193\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7287581699346405,\n \"acc_stderr\": 0.025457756696667878,\n\
\ \"acc_norm\": 0.7287581699346405,\n \"acc_norm_stderr\": 0.025457756696667878\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7331189710610932,\n\
\ \"acc_stderr\": 0.025122637608816657,\n \"acc_norm\": 0.7331189710610932,\n\
\ \"acc_norm_stderr\": 0.025122637608816657\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7191358024691358,\n \"acc_stderr\": 0.025006469755799208,\n\
\ \"acc_norm\": 0.7191358024691358,\n \"acc_norm_stderr\": 0.025006469755799208\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.46099290780141844,\n \"acc_stderr\": 0.029736592526424438,\n \
\ \"acc_norm\": 0.46099290780141844,\n \"acc_norm_stderr\": 0.029736592526424438\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.48565840938722293,\n\
\ \"acc_stderr\": 0.012764981829524265,\n \"acc_norm\": 0.48565840938722293,\n\
\ \"acc_norm_stderr\": 0.012764981829524265\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6470588235294118,\n \"acc_stderr\": 0.02902942281568139,\n\
\ \"acc_norm\": 0.6470588235294118,\n \"acc_norm_stderr\": 0.02902942281568139\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.673202614379085,\n \"acc_stderr\": 0.01897542792050721,\n \
\ \"acc_norm\": 0.673202614379085,\n \"acc_norm_stderr\": 0.01897542792050721\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7,\n\
\ \"acc_stderr\": 0.04389311454644287,\n \"acc_norm\": 0.7,\n \
\ \"acc_norm_stderr\": 0.04389311454644287\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7551020408163265,\n \"acc_stderr\": 0.02752963744017493,\n\
\ \"acc_norm\": 0.7551020408163265,\n \"acc_norm_stderr\": 0.02752963744017493\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8258706467661692,\n\
\ \"acc_stderr\": 0.026814951200421603,\n \"acc_norm\": 0.8258706467661692,\n\
\ \"acc_norm_stderr\": 0.026814951200421603\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.85,\n \"acc_stderr\": 0.035887028128263686,\n \
\ \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.035887028128263686\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4939759036144578,\n\
\ \"acc_stderr\": 0.03892212195333045,\n \"acc_norm\": 0.4939759036144578,\n\
\ \"acc_norm_stderr\": 0.03892212195333045\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.02878210810540171,\n\
\ \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.02878210810540171\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3047735618115055,\n\
\ \"mc1_stderr\": 0.016114124156882455,\n \"mc2\": 0.4551491788416383,\n\
\ \"mc2_stderr\": 0.014826375266749701\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7308602999210734,\n \"acc_stderr\": 0.012464911951268736\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.11372251705837756,\n \
\ \"acc_stderr\": 0.008744810131034042\n }\n}\n```"
repo_url: https://huggingface.co/chinoll/Yi-6b-200k-dpo
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|arc:challenge|25_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|gsm8k|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hellaswag|10_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-04T16-10-17.402126.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-04T16-10-17.402126.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- '**/details_harness|winogrande|5_2023-12-04T16-10-17.402126.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-12-04T16-10-17.402126.parquet'
- config_name: results
data_files:
- split: 2023_12_04T16_10_17.402126
path:
- results_2023-12-04T16-10-17.402126.parquet
- split: latest
path:
- results_2023-12-04T16-10-17.402126.parquet
---
# Dataset Card for Evaluation run of chinoll/Yi-6b-200k-dpo
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/chinoll/Yi-6b-200k-dpo
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [chinoll/Yi-6b-200k-dpo](https://huggingface.co/chinoll/Yi-6b-200k-dpo) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_chinoll__Yi-6b-200k-dpo",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-04T16:10:17.402126](https://huggingface.co/datasets/open-llm-leaderboard/details_chinoll__Yi-6b-200k-dpo/blob/main/results_2023-12-04T16-10-17.402126.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6274780891690785,
"acc_stderr": 0.03214198982171106,
"acc_norm": 0.6382309545732996,
"acc_norm_stderr": 0.03286487964348697,
"mc1": 0.3047735618115055,
"mc1_stderr": 0.016114124156882455,
"mc2": 0.4551491788416383,
"mc2_stderr": 0.014826375266749701
},
"harness|arc:challenge|25": {
"acc": 0.39505119453924914,
"acc_stderr": 0.014285898292938172,
"acc_norm": 0.4308873720136519,
"acc_norm_stderr": 0.014471133392642475
},
"harness|hellaswag|10": {
"acc": 0.5570603465445131,
"acc_stderr": 0.004957182635381807,
"acc_norm": 0.7452698665604461,
"acc_norm_stderr": 0.004348189459336535
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6148148148148148,
"acc_stderr": 0.04203921040156279,
"acc_norm": 0.6148148148148148,
"acc_norm_stderr": 0.04203921040156279
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7105263157894737,
"acc_stderr": 0.03690677986137282,
"acc_norm": 0.7105263157894737,
"acc_norm_stderr": 0.03690677986137282
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768078,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768078
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6943396226415094,
"acc_stderr": 0.028353298073322666,
"acc_norm": 0.6943396226415094,
"acc_norm_stderr": 0.028353298073322666
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6527777777777778,
"acc_stderr": 0.039812405437178615,
"acc_norm": 0.6527777777777778,
"acc_norm_stderr": 0.039812405437178615
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.36,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.36,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6647398843930635,
"acc_stderr": 0.03599586301247077,
"acc_norm": 0.6647398843930635,
"acc_norm_stderr": 0.03599586301247077
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.30392156862745096,
"acc_stderr": 0.04576665403207762,
"acc_norm": 0.30392156862745096,
"acc_norm_stderr": 0.04576665403207762
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.77,
"acc_stderr": 0.042295258468165065,
"acc_norm": 0.77,
"acc_norm_stderr": 0.042295258468165065
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6042553191489362,
"acc_stderr": 0.03196758697835362,
"acc_norm": 0.6042553191489362,
"acc_norm_stderr": 0.03196758697835362
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.43859649122807015,
"acc_stderr": 0.04668000738510455,
"acc_norm": 0.43859649122807015,
"acc_norm_stderr": 0.04668000738510455
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6137931034482759,
"acc_stderr": 0.04057324734419035,
"acc_norm": 0.6137931034482759,
"acc_norm_stderr": 0.04057324734419035
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.47354497354497355,
"acc_stderr": 0.02571523981134676,
"acc_norm": 0.47354497354497355,
"acc_norm_stderr": 0.02571523981134676
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3888888888888889,
"acc_stderr": 0.04360314860077459,
"acc_norm": 0.3888888888888889,
"acc_norm_stderr": 0.04360314860077459
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.4,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.4,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7774193548387097,
"acc_stderr": 0.023664216671642518,
"acc_norm": 0.7774193548387097,
"acc_norm_stderr": 0.023664216671642518
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.49261083743842365,
"acc_stderr": 0.03517603540361009,
"acc_norm": 0.49261083743842365,
"acc_norm_stderr": 0.03517603540361009
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7696969696969697,
"acc_stderr": 0.0328766675860349,
"acc_norm": 0.7696969696969697,
"acc_norm_stderr": 0.0328766675860349
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8080808080808081,
"acc_stderr": 0.028057791672989017,
"acc_norm": 0.8080808080808081,
"acc_norm_stderr": 0.028057791672989017
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8601036269430051,
"acc_stderr": 0.02503387058301518,
"acc_norm": 0.8601036269430051,
"acc_norm_stderr": 0.02503387058301518
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6487179487179487,
"acc_stderr": 0.024203665177902803,
"acc_norm": 0.6487179487179487,
"acc_norm_stderr": 0.024203665177902803
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.337037037037037,
"acc_stderr": 0.028820884666253252,
"acc_norm": 0.337037037037037,
"acc_norm_stderr": 0.028820884666253252
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7605042016806722,
"acc_stderr": 0.027722065493361276,
"acc_norm": 0.7605042016806722,
"acc_norm_stderr": 0.027722065493361276
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3973509933774834,
"acc_stderr": 0.0399552400768168,
"acc_norm": 0.3973509933774834,
"acc_norm_stderr": 0.0399552400768168
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8385321100917431,
"acc_stderr": 0.01577623925616323,
"acc_norm": 0.8385321100917431,
"acc_norm_stderr": 0.01577623925616323
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5370370370370371,
"acc_stderr": 0.03400603625538272,
"acc_norm": 0.5370370370370371,
"acc_norm_stderr": 0.03400603625538272
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7794117647058824,
"acc_stderr": 0.02910225438967409,
"acc_norm": 0.7794117647058824,
"acc_norm_stderr": 0.02910225438967409
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7805907172995781,
"acc_stderr": 0.026939106581553945,
"acc_norm": 0.7805907172995781,
"acc_norm_stderr": 0.026939106581553945
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6591928251121076,
"acc_stderr": 0.0318114974705536,
"acc_norm": 0.6591928251121076,
"acc_norm_stderr": 0.0318114974705536
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7480916030534351,
"acc_stderr": 0.03807387116306086,
"acc_norm": 0.7480916030534351,
"acc_norm_stderr": 0.03807387116306086
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.768595041322314,
"acc_stderr": 0.03849856098794086,
"acc_norm": 0.768595041322314,
"acc_norm_stderr": 0.03849856098794086
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8055555555555556,
"acc_stderr": 0.038260763248848646,
"acc_norm": 0.8055555555555556,
"acc_norm_stderr": 0.038260763248848646
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7791411042944786,
"acc_stderr": 0.03259177392742179,
"acc_norm": 0.7791411042944786,
"acc_norm_stderr": 0.03259177392742179
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.39285714285714285,
"acc_stderr": 0.04635550135609976,
"acc_norm": 0.39285714285714285,
"acc_norm_stderr": 0.04635550135609976
},
"harness|hendrycksTest-management|5": {
"acc": 0.7961165048543689,
"acc_stderr": 0.03989139859531771,
"acc_norm": 0.7961165048543689,
"acc_norm_stderr": 0.03989139859531771
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8803418803418803,
"acc_stderr": 0.02126271940040697,
"acc_norm": 0.8803418803418803,
"acc_norm_stderr": 0.02126271940040697
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768079,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768079
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8109833971902938,
"acc_stderr": 0.014000791294407004,
"acc_norm": 0.8109833971902938,
"acc_norm_stderr": 0.014000791294407004
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7225433526011561,
"acc_stderr": 0.024105712607754307,
"acc_norm": 0.7225433526011561,
"acc_norm_stderr": 0.024105712607754307
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3854748603351955,
"acc_stderr": 0.016277927039638193,
"acc_norm": 0.3854748603351955,
"acc_norm_stderr": 0.016277927039638193
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7287581699346405,
"acc_stderr": 0.025457756696667878,
"acc_norm": 0.7287581699346405,
"acc_norm_stderr": 0.025457756696667878
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7331189710610932,
"acc_stderr": 0.025122637608816657,
"acc_norm": 0.7331189710610932,
"acc_norm_stderr": 0.025122637608816657
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7191358024691358,
"acc_stderr": 0.025006469755799208,
"acc_norm": 0.7191358024691358,
"acc_norm_stderr": 0.025006469755799208
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.46099290780141844,
"acc_stderr": 0.029736592526424438,
"acc_norm": 0.46099290780141844,
"acc_norm_stderr": 0.029736592526424438
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.48565840938722293,
"acc_stderr": 0.012764981829524265,
"acc_norm": 0.48565840938722293,
"acc_norm_stderr": 0.012764981829524265
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6470588235294118,
"acc_stderr": 0.02902942281568139,
"acc_norm": 0.6470588235294118,
"acc_norm_stderr": 0.02902942281568139
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.673202614379085,
"acc_stderr": 0.01897542792050721,
"acc_norm": 0.673202614379085,
"acc_norm_stderr": 0.01897542792050721
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7,
"acc_stderr": 0.04389311454644287,
"acc_norm": 0.7,
"acc_norm_stderr": 0.04389311454644287
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7551020408163265,
"acc_stderr": 0.02752963744017493,
"acc_norm": 0.7551020408163265,
"acc_norm_stderr": 0.02752963744017493
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8258706467661692,
"acc_stderr": 0.026814951200421603,
"acc_norm": 0.8258706467661692,
"acc_norm_stderr": 0.026814951200421603
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.035887028128263686,
"acc_norm": 0.85,
"acc_norm_stderr": 0.035887028128263686
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4939759036144578,
"acc_stderr": 0.03892212195333045,
"acc_norm": 0.4939759036144578,
"acc_norm_stderr": 0.03892212195333045
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8304093567251462,
"acc_stderr": 0.02878210810540171,
"acc_norm": 0.8304093567251462,
"acc_norm_stderr": 0.02878210810540171
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3047735618115055,
"mc1_stderr": 0.016114124156882455,
"mc2": 0.4551491788416383,
"mc2_stderr": 0.014826375266749701
},
"harness|winogrande|5": {
"acc": 0.7308602999210734,
"acc_stderr": 0.012464911951268736
},
"harness|gsm8k|5": {
"acc": 0.11372251705837756,
"acc_stderr": 0.008744810131034042
}
}
```
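For a quick summary of a results dictionary like the one above, a small helper can average one metric across tasks. This is only a sketch for inspection — the official Open LLM Leaderboard aggregates benchmarks with its own scheme, not a flat mean over task entries:

```python
def mean_metric(results, metric="acc"):
    """Average one metric over all task entries that report it."""
    values = [task[metric] for task in results.values() if metric in task]
    return sum(values) / len(values)

# Two entries copied from the JSON above.
results = {
    "harness|winogrande|5": {"acc": 0.7308602999210734, "acc_stderr": 0.012464911951268736},
    "harness|gsm8k|5": {"acc": 0.11372251705837756, "acc_stderr": 0.008744810131034042},
}
print(round(mean_metric(results), 4))  # 0.4223
```

The same helper works for `acc_norm` or the stderr fields by passing a different `metric` name.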
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
CyberHarem/inaba_tewi_touhou | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of inaba_tewi/因幡てゐ/이나바테위 (Touhou)
This is the dataset of inaba_tewi/因幡てゐ/이나바테위 (Touhou), containing 500 images and their tags.
The core tags of this character are `animal_ears, rabbit_ears, short_hair, red_eyes, black_hair, brown_hair, rabbit_girl, tail, rabbit_tail`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 492.96 MiB | [Download](https://huggingface.co/datasets/CyberHarem/inaba_tewi_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 313.22 MiB | [Download](https://huggingface.co/datasets/CyberHarem/inaba_tewi_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1137 | 637.90 MiB | [Download](https://huggingface.co/datasets/CyberHarem/inaba_tewi_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 451.04 MiB | [Download](https://huggingface.co/datasets/CyberHarem/inaba_tewi_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1137 | 844.21 MiB | [Download](https://huggingface.co/datasets/CyberHarem/inaba_tewi_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/inaba_tewi_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 9 |  |  |  |  |  | 1girl, carrot_necklace, pendant, pink_dress, solo, open_mouth, short_sleeves, looking_at_viewer, simple_background, smile, puffy_sleeves, white_background |
| 1 | 25 |  |  |  |  |  | 1girl, carrot_necklace, puffy_short_sleeves, solo, pink_dress, smile, floppy_ears, looking_at_viewer, simple_background, bangs, blush, white_background, closed_mouth, hair_between_eyes, ribbon-trimmed_dress, frills |
| 2 | 13 |  |  |  |  |  | blush, 1girl, bangs, floppy_ears, loli, looking_at_viewer, nipples, completely_nude, flat_chest, solo, pussy, simple_background, full_body, hair_between_eyes, navel, open_mouth, smile, barefoot, white_background, spread_legs, :3, censored, small_breasts |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | carrot_necklace | pendant | pink_dress | solo | open_mouth | short_sleeves | looking_at_viewer | simple_background | smile | puffy_sleeves | white_background | puffy_short_sleeves | floppy_ears | bangs | blush | closed_mouth | hair_between_eyes | ribbon-trimmed_dress | frills | loli | nipples | completely_nude | flat_chest | pussy | full_body | navel | barefoot | spread_legs | :3 | censored | small_breasts |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:------------------|:----------|:-------------|:-------|:-------------|:----------------|:--------------------|:--------------------|:--------|:----------------|:-------------------|:----------------------|:--------------|:--------|:--------|:---------------|:--------------------|:-----------------------|:---------|:-------|:----------|:------------------|:-------------|:--------|:------------|:--------|:-----------|:--------------|:-----|:-----------|:----------------|
| 0 | 9 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 1 | 25 |  |  |  |  |  | X | X | | X | X | | | X | X | X | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 2 | 13 |  |  |  |  |  | X | | | | X | X | | X | X | X | | X | | X | X | X | | X | | | X | X | X | X | X | X | X | X | X | X | X | X |
|
ntt123/infore | ---
license: cc-by-nc-4.0
---
|
WillHeld/ACL-OCL-FORK | ---
dataset_info:
features:
- name: acl_id
dtype: string
- name: abstract
dtype: string
- name: full_text
dtype: string
- name: corpus_paper_id
dtype: string
- name: pdf_hash
dtype: string
- name: numcitedby
dtype: string
- name: url
dtype: string
- name: publisher
dtype: string
- name: address
dtype: string
- name: year
dtype: string
- name: month
dtype: string
- name: booktitle
dtype: string
- name: author
dtype: string
- name: title
dtype: string
- name: pages
dtype: string
- name: doi
dtype: string
- name: number
dtype: string
- name: volume
dtype: string
- name: journal
dtype: string
- name: editor
dtype: string
- name: isbn
dtype: string
- name: ENTRYTYPE
dtype: string
- name: ID
dtype: string
- name: language
dtype: string
- name: note
dtype: string
- name: Model Predicted Topics
dtype: string
- name: json
dtype: string
- name: countries
sequence: string
- name: langs
sequence: string
- name: lang_mentions
dtype: string
- name: lang_mentions_sample
dtype: string
- name: tok_len
dtype: string
- name: open_ai_resp
dtype: string
- name: final_langs
sequence: string
- name: resource
dtype: bool
- name: methods
dtype: bool
- name: deployment
dtype: bool
- name: gpu
dtype: bool
splits:
- name: train
num_bytes: 6537125368
num_examples: 58053
download_size: 2186687108
dataset_size: 6537125368
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ACL-OCL-FORK"
This is a fork of the ACL-OCL Corpus (https://arxiv.org/abs/2305.14996) that adds information about the geography of author affiliations and the languages studied in each paper.
For the added information, please cite
```
@misc{held2023material,
title={A Material Lens on Coloniality in NLP},
author={William Held and Camille Harris and Michael Best and Diyi Yang},
year={2023},
eprint={2311.08391},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
If you use this data, please cite the original corpus below and reach out to me!
```
@misc{rohatgi2023acl,
title={The ACL OCL Corpus: Advancing Open Science in Computational Linguistics},
author={Shaurya Rohatgi and Yanxia Qin and Benjamin Aw and Niranjana Unnithan and Min-Yen Kan},
year={2023},
eprint={2305.14996},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
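To slice the corpus by the added language annotations, a minimal sketch over dict-like rows (the language codes and titles below are illustrative; real rows would come from `load_dataset("WillHeld/ACL-OCL-FORK")` with the `final_langs` sequence described in the schema above):

```python
def papers_studying(rows, lang_code):
    """Keep rows whose `final_langs` sequence mentions the given language code."""
    return [row for row in rows if lang_code in (row.get("final_langs") or [])]

# Toy rows mimicking the schema above.
rows = [
    {"title": "Paper A", "final_langs": ["eng"]},
    {"title": "Paper B", "final_langs": ["tir", "eng"]},
    {"title": "Paper C", "final_langs": None},
]
print([r["title"] for r in papers_studying(rows, "tir")])  # ['Paper B']
```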
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
WizardLM/WizardLM_evol_instruct_V2_196k | ---
license: mit
---
## News
- 🔥 🔥 🔥 [08/11/2023] We release **WizardMath** Models.
- 🔥 Our **WizardMath-70B-V1.0** model slightly outperforms some closed-source LLMs on the GSM8K, including **ChatGPT 3.5**, **Claude Instant 1** and **PaLM 2 540B**.
- 🔥 Our **WizardMath-70B-V1.0** model achieves **81.6 pass@1** on the [GSM8k Benchmarks](https://github.com/openai/grade-school-math), which is **24.8** points higher than the SOTA open-source LLM.
- 🔥 Our **WizardMath-70B-V1.0** model achieves **22.7 pass@1** on the [MATH Benchmarks](https://github.com/hendrycks/math), which is **9.2** points higher than the SOTA open-source LLM.
| Model | Checkpoint | Paper | GSM8k | MATH |Online Demo| License|
| ----- |------| ---- |------|-------| ----- | ----- |
| WizardMath-70B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-70B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **81.6** | **22.7** |[Demo](http://47.103.63.15:50083/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> |
| WizardMath-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **63.9** | **14.0** |[Demo](http://47.103.63.15:50082/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> |
| WizardMath-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **54.9** | **10.7** | [Demo](http://47.103.63.15:50080/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a>|
<font size=4>
| <sup>Model</sup> | <sup>Checkpoint</sup> | <sup>Paper</sup> |<sup>MT-Bench</sup> | <sup>AlpacaEval</sup> | <sup>WizardEval</sup> | <sup>HumanEval</sup> | <sup>License</sup>|
| ----- |------| ---- |------|-------| ----- | ----- | ----- |
| <sup>WizardLM-13B-V1.2</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.2" target="_blank">HF Link</a> </sup>| | <sup>7.06</sup> | <sup>89.17%</sup> | <sup>101.4% </sup>|<sup>36.6 pass@1</sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> |
| <sup>WizardLM-13B-V1.1</sup> |<sup> 🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.1" target="_blank">HF Link</a> </sup> | | <sup>6.76</sup> |<sup>86.32%</sup> | <sup>99.3% </sup> |<sup>25.0 pass@1</sup>| <sup>Non-commercial</sup>|
| <sup>WizardLM-30B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-30B-V1.0" target="_blank">HF Link</a></sup> | | <sup>7.01</sup> | | <sup>97.8% </sup> | <sup>37.8 pass@1</sup>| <sup>Non-commercial</sup> |
| <sup>WizardLM-13B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.0" target="_blank">HF Link</a> </sup> | | <sup>6.35</sup> | <sup>75.31%</sup> | <sup>89.1% </sup> |<sup> 24.0 pass@1 </sup> | <sup>Non-commercial</sup>|
| <sup>WizardLM-7B-V1.0 </sup>| <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-7B-V1.0" target="_blank">HF Link</a> </sup> |<sup> 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> </sup>| | | <sup>78.0% </sup> |<sup>19.1 pass@1 </sup>|<sup> Non-commercial</sup>|
| <sup>WizardCoder-15B-V1.0</sup> | <sup> 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-15B-V1.0" target="_blank">HF Link</a></sup> | <sup>📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a></sup> | || |<sup> 57.3 pass@1 </sup> | <sup> <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a></sup> |
</font>
**Repository**: https://github.com/nlpxucan/WizardLM
**Twitter**: https://twitter.com/WizardLM_AI/status/1669364947606982656
This dataset contains 143K rows of mixed evolved data from Alpaca and ShareGPT.
This is the latest optimized version of Evol-Instruct training data of WizardLM model.
Due to the data usage license, please **merge** the original [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) with this one to get the **final full dataset**, which would consist of around 196k rows of data.
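A minimal, offline sketch of that merge (the `conversations` field name is an illustrative assumption — check both datasets for their actual schemas; exact-duplicate rows are dropped here, which may or may not be desired):

```python
import json

def merge_instruction_sets(evol_rows, sharegpt_rows):
    """Concatenate two lists of JSON-like records, dropping exact duplicates."""
    seen, merged = set(), []
    for row in evol_rows + sharegpt_rows:
        fingerprint = json.dumps(row, sort_keys=True)
        if fingerprint not in seen:
            seen.add(fingerprint)
            merged.append(row)
    return merged

evol = [{"conversations": [{"from": "human", "value": "Hi"}]}]
share = [
    {"conversations": [{"from": "human", "value": "Hi"}]},  # exact duplicate of the evol row
    {"conversations": [{"from": "human", "value": "Hello"}]},
]
print(len(merge_instruction_sets(evol, share)))  # 2
```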
|
cs_restaurants | ---
annotations_creators:
- found
language_creators:
- expert-generated
- machine-generated
language:
- cs
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|other-san-francisco-restaurants
task_categories:
- text2text-generation
- text-generation
- fill-mask
task_ids:
- dialogue-modeling
- language-modeling
- masked-language-modeling
paperswithcode_id: czech-restaurant-information
pretty_name: Czech Restaurant
tags:
- intent-to-text
dataset_info:
features:
- name: dialogue_act
dtype: string
- name: delexicalized_dialogue_act
dtype: string
- name: text
dtype: string
- name: delexicalized_text
dtype: string
config_name: CSRestaurants
splits:
- name: train
num_bytes: 654071
num_examples: 3569
- name: validation
num_bytes: 181528
num_examples: 781
- name: test
num_bytes: 191334
num_examples: 842
download_size: 1463019
dataset_size: 1026933
---
# Dataset Card for Czech Restaurant
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Czech restaurants homepage](https://github.com/UFAL-DSG/cs_restaurant_dataset)
- **Paper:** [Czech restaurants on Arxiv](https://arxiv.org/abs/1910.05298)
### Dataset Summary
This is a dataset for NLG in task-oriented spoken dialogue systems with Czech as the target language. It originated as a translation of the [English San Francisco Restaurants dataset](https://www.repository.cam.ac.uk/handle/1810/251304) by Wen et al. (2015). The domain is restaurant information in Prague, with random/fictional values. It includes input dialogue acts and the corresponding outputs in Czech.
### Supported Tasks and Leaderboards
- `other-intent-to-text`: The dataset can be used to train a model for data-to-text generation: from a desired dialogue act, the model must produce textual output that conveys this intention.
### Languages
The entire dataset is in Czech, translated from the English San Francisco dataset by professional translators.
## Dataset Structure
### Data Instances
Example of a data instance:
```
{
"da": "?request(area)",
"delex_da": "?request(area)",
"text": "Jakou lokalitu hledáte ?",
"delex_text": "Jakou lokalitu hledáte ?"
}
```
### Data Fields
- `da`: input dialogue act
- `delex_da`: input dialogue act, delexicalized
- `text`: output text
- `delex_text`: output text, delexicalized
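To illustrate how the delexicalized fields relate to the surface forms, a toy lexicalization sketch (the `X-<slot>` placeholder convention and the example slot names are assumptions for illustration; see the dataset repository for the exact format):

```python
import re

def lexicalize(delex_text, slots):
    """Replace X-<slot> placeholders with concrete values; unknown slots stay intact."""
    return re.sub(r"X-(\w+)", lambda m: slots.get(m.group(1), m.group(0)), delex_text)

template = "Restaurace X-name se nachází v lokalitě X-area ."
print(lexicalize(template, {"name": "U Kalicha", "area": "Nové Město"}))
# Restaurace U Kalicha se nachází v lokalitě Nové Město .
```

Note that in real Czech output the filled values also need morphological inflection, which is exactly the challenge this dataset targets; simple string substitution is only the starting point.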
### Data Splits
The order of the instances is random; the split is roughly 3:1:1 between train, development, and test, ensuring that the different sections don't share the same DAs (so the generators need to generalize to unseen DAs), but they share as many different generic DA types as possible (e.g., confirm, inform_only_match etc.). DA types that only have a single corresponding DA (e.g., bye()) are included in the training set.
The training, development, and test set contain 3569, 781, and 842 instances, respectively.
## Dataset Creation
### Curation Rationale
While most current neural NLG systems do not explicitly contain language-specific components and are thus capable of multilingual generation in principle, there has been little work to test these capabilities experimentally. This goes hand in hand with the scarcity of non-English training datasets for NLG – the only data-to-text NLG set known to us is a small sportscasting Korean dataset (Chen et al., 2010), which only contains a limited number of named entities, reducing the need for their inflection. Since most generators are only tested on English, they do not need to handle grammar complexities not present in English. A prime example is the delexicalization technique used by most current generators. We create a novel dataset for Czech delexicalized generation; this extends the typical task of data-to-text NLG by requiring attribute value inflection. We choose Czech as an example of a morphologically complex language with a large set of NLP tools readily available.
### Source Data
#### Initial Data Collection and Normalization
The original data was collected from the [English San Francisco Restaurants dataset](https://www.repository.cam.ac.uk/handle/1810/251304) by Wen et al. (2015).
#### Who are the source language producers?
The original data was produced in interactions between Amazon Mechanical Turk workers and themed around San Francisco restaurants. This data was then translated into Czech and localized to Prague restaurants by professional translators.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
This data does not contain personal information.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Ondřej Dušek, Filip Jurčíček, Josef Dvořák, Petra Grycová, Matěj Hejda, Jana Olivová, Michal Starý, Eva Štichová, Charles University. This work was funded by the Ministry of Education, Youth and Sports of the Czech Republic under the grant agreement LK11221 and core research funding, SVV project 260 333, and GAUK grant 2058214 of Charles University in Prague. It used language resources stored and distributed by the LINDAT/CLARIN project of the Ministry of Education, Youth and Sports of the Czech Republic (project LM2015071).
### Licensing Information
[Creative Commons 4.0 BY-SA](https://creativecommons.org/licenses/by-sa/4.0/)
### Citation Information
```
@article{DBLP:journals/corr/abs-1910-05298,
author = {Ondrej Dusek and
Filip Jurcicek},
title = {Neural Generation for Czech: Data and Baselines},
journal = {CoRR},
volume = {abs/1910.05298},
year = {2019},
url = {http://arxiv.org/abs/1910.05298},
archivePrefix = {arXiv},
eprint = {1910.05298},
timestamp = {Wed, 16 Oct 2019 16:25:53 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1910-05298.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@TevenLeScao](https://github.com/TevenLeScao) for adding this dataset. |
OpenAssistant/oasst_top1_2023-08-25 | ---
license: apache-2.0
task_categories:
- conversational
size_categories:
- 10K<n<100K
---
# OpenAssistant TOP-1 Conversation Threads
- [Guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) style export of the best conversation threads from the [open-assistant.io](https://open-assistant.io/) database
- exported August 25, 2023
- jsonl files with [chatml](https://github.com/openai/openai-python/blob/main/chatml.md) formatted conversations
- train: 12,947 samples / valid: 680 samples |
Softage-AI/vqa-data_sample | ---
license: mit
language:
- en
---
# VQA Dataset
## Description
This dataset offers 16 Visual Question-Answer (VQA) pairs. Each prompt links an image with a user's question and a corresponding answer explaining how to complete the action. This dataset, though limited in its size and scope, serves as an illustration of SoftAge's capabilities in the domain of Visual Question Answering (VQA) for training AI agents.
## Data attributes
- Screenshot Url: Link to the image representing the user’s problem. (string)
- Query: User's question about a specific problem (string)
- Answer: A response to the prompt, explaining how to perform the action. (string)
- References: Links used for generating the response to the prompt. (string)
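As a concrete sketch of the schema above (all field values here are invented for illustration, not drawn from the actual dataset), a record and a minimal validity check might look like:

```python
# Illustrative sketch of one record following the attribute list above.
# All field values are invented for illustration.
REQUIRED_FIELDS = {"Screenshot Url", "Query", "Answer", "References"}

def validate_record(record: dict) -> bool:
    """Check that a VQA record carries all four string attributes."""
    return (REQUIRED_FIELDS <= record.keys()
            and all(isinstance(record[f], str) for f in REQUIRED_FIELDS))

example = {
    "Screenshot Url": "https://example.com/settings-screenshot.png",
    "Query": "How do I enable dark mode on this settings screen?",
    "Answer": "Open Settings > Appearance and toggle 'Dark mode'.",
    "References": "https://example.com/docs/dark-mode",
}
assert validate_record(example)
```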
## Dataset Source
This dataset is curated by the delivery team @SoftAge
## Limitations and Biases
- Limited size (16 samples) might not cover the diversity of questions users can potentially ask.
- The answer or response might not address all the potential complexities of the question.
## Potential Uses
Training VQA models to understand and answer user questions based on visuals.
|
alxfgh/PubChem_Drug_Instruction_Tuning | ---
pretty_name: PubChem Drug Instruction Tuning
--- |
froggeric/imatrix | ---
language:
- en
---
# Input files for generating the Importance Matrix
## Which file to use for generating the importance matrix
Not all importance matrices are equal. The best results are obtained when using a source file similar to the
training data. Size also matters: the bigger the model (eg: 70b vs 13b) and the higher the quant (eg: q6_k vs iq3_xs),
the bigger the source file needs to be to make an impact. Multiple input files can be combined if needed;
for example:
```
cat multilingual.txt code.txt badwords_multilingual.txt > custom_multilingual.matrix
```
Note on **context size** when generating the matrix: in general, a small context size such as 512 is recommended, and community
tests have shown it usually performs better than a larger one such as 4096. However, I would argue this is highly dependent on the
source data you are using: with random tokens or short text a small context makes sense; but when using larger texts, a larger
context matching the size of the texts might be a better choice. Remember that the size is in tokens, which roughly translates
to number of words, not characters.
Another important factor to consider: **an importance matrix based on English language data only will degrade
the model's multilingual capabilities**. Therefore the use of multilingual matrices is especially important when
considering models like Mistral or Command-R. The same applies to coding capabilities. Choose carefully based
on the model's strengths.
You will find below descriptions for the various input files provided, to help you choose the correct one.
## Community provided files
**groups_merged**\
_"Here is a decent general purpose imatrix calibration dataset. It should be more diverse than wikitext at ~30k tokens, as it is excerpts of a larger dataset which includes coding examples (which seems quite important!)
This means it's generally higher entropy data compared to wikitext, and it's real data rather than pseudo-randomly generated data.
I get lower KL div than wikitext for the same length and the outputs seem qualitatively better."_ (kalomaze)\
https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384
**group_10_merged**\
(superseded by groups_merged)\
_"This is about ~50k pseudo-random tokens.
I am getting the best balance between the maximum divergence and the other divergence statistics using this file when quantizing 7b"_ (kalomaze)\
https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8349233
**20k_random_data**\
(superseded by group_10_merged)\
https://github.com/ggerganov/llama.cpp/discussions/5006#discussioncomment-8163190
**8k_random_data**\
(superseded by 20k_random_data)\
https://github.com/ggerganov/llama.cpp/discussions/5006#discussion-6087829
**badwords**\
402 english words that can be considered dirty, naughty, obscene, or otherwise bad words.
This could be useful to remove guard rails.
Compiled from [Shutterstock github repo](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words/tree/master)
**badwords_multilingual**\
2580 words that can be considered dirty, naughty, obscene, or otherwise bad words. Includes 26 languages.
This could be useful to remove guard rails.
Compiled from [Shutterstock github repo](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words/tree/master)
**ptb.train**\
Penn Treebank (PTB) is a widely used preprocessed large dataset designed for language training. Casing,
punctuation and numbers have been removed from the training data. Recently it has kind of been superseded
by WikiText which does not have these removals, features a larger vocabulary and full articles (better
suited for models that can take advantage of long term dependencies). However, for importance matrix training,
PTB is still a valid dataset, which has the advantage of being manually curated, and similar to WikiText,
without being WikiText; this can help against bias.
**WikiText**\
The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of
verified Good and Featured articles on Wikipedia. Compared to PTB, WikiText-2 is over 2 times larger and
WikiText-103 is over 110 times larger. As it is composed of full articles, the dataset is well suited for models
that can take advantage of long term dependencies.\
https://huggingface.co/datasets/wikitext
**WikiText_FR**\
70 million tokens extracted from the set of french Wikipedia articles that are classified as "quality articles"
or "good articles".\
https://huggingface.co/datasets/asi/wikitext_fr
**c4**\
The C4 dataset is a collection text sourced from the public Common Crawl web scrape.
It includes heuristics to extract only natural language (as opposed to boilerplate and other gibberish)
in addition to extensive deduplication. C4 dataset was explicitly designed to be English only:
any page that was not given a probability of at least 99% of being English by langdetect was discarded.
**code** (exllamav2)\
Programming
**multilingual** (exllamav2)\
English, Arabic, Chinese, French, German, Japanese, Polish, Russian, Spanish, Swedish, Turkish, Hebrew,
Macedonian, Norwegian, Lithuanian, Greek, Italian, Afrikaans, Dutch, Danish.
**technical** (exllamav2)\
Technical writing.
**tiny**\
Very short stories. Be mindful of the prevalence of _"Once upon a time"_ and _"<|endoftext|>"_.
Extract from [TinyStories dataset](https://huggingface.co/datasets/roneneldan/TinyStories)
**wiki** (exllamav2)\
Small Wikipedia dump. Unclean, contains many unwanted tags.
exllamav2 calibration data taken from:\
https://github.com/turboderp/exllamav2/tree/master/conversion/standard_cal_data
## How to quantize using an imatrix, with llama.cpp
1. Get one of the input files collected here, or elsewhere.
2. Convert or download the model you want to quantise, in fp16 GGUF format.
3. Generate an imatrix file specific to the model you want to quantise
```
cd <llama.cpp directory>
./imatrix -m <model_path>/ggml-model-f16.gguf -f <plain_text_matrix_file> -o <output.matrix> -t 12 -ngl 144 --chunks 100 -b 512 -c 512
# -ngl : layers offloaded to gpu (recommended to use number of layers the model contains)
# -t 12 : number of threads (should probably match the number of CPU cores)
# -c 512 : context size, testing seems to show 512 is recommended (default=512, 0=loaded from model)
# -b 512 : batch size (default=512)
# --chunks 100 (recommended)
# --mlock : keep model in ram (only use if you had sufficient RAM for the whole fp16)
```
4. Use the generated matrix file to quantise the model (see further down for some pre-computed matrix files)
```
./quantize --imatrix <output.matrix> <model_path>/ggml-model-f16.gguf <quantisation_level, eg:IQ4_XS>
```
Note: normal quantisation also benefits from using a matrix file. It also seems that a bigger input matrix is
better for higher quantisation.
### Pre-computed matrix files
Since generating a matrix file takes time and requires significant processing power and memory,
some kind folks have made available pre-computed matrix files. You can use those directly in the quantize process.
However, remember they can only be used for the specific model mentioned, and no other.
[Joseph717171/Imatrices](https://huggingface.co/Joseph717171/Imatrices) a growing list of many matrix files for 7B to 17B models
[ikawrakow/imatrix-from-wiki-train](https://huggingface.co/datasets/ikawrakow/imatrix-from-wiki-train) matrix files for base models (llama, mistral, nous-hermes, qwen) trained on `wiki.train.raw` |
kayhal/naruda_poems | ---
task_categories:
- text-generation
--- |
presencesw/dataset3_translated_not_cleaned | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: references
sequence: string
- name: question_vi
dtype: string
- name: answer_vi
dtype: string
- name: references_vi
sequence: string
splits:
- name: train
num_bytes: 4050525.4273333335
num_examples: 673
download_size: 2039871
dataset_size: 4050525.4273333335
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Asap7772/Flatten-Math-Shepherd_0.8_2.0_-2.0_False | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: next_prompt
dtype: string
- name: next_response
dtype: string
- name: label
dtype: string
- name: question
dtype: string
- name: step
dtype: int64
- name: trajectory
dtype: int64
- name: mask
dtype: int64
- name: reward
dtype: float64
- name: mc_values
dtype: float64
splits:
- name: train
num_bytes: 4279469183
num_examples: 2482945
- name: test
num_bytes: 491798737
num_examples: 283159
download_size: 883504505
dataset_size: 4771267920
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
CyberHarem/kayoko_bluearchive | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of kayoko/鬼方カヨコ/佳代子 (Blue Archive)
This is the dataset of kayoko/鬼方カヨコ/佳代子 (Blue Archive), containing 500 images and their tags.
The core tags of this character are `black_hair, horns, multicolored_hair, white_hair, two-tone_hair, red_eyes, halo, demon_horns, hair_between_eyes, long_hair, black_horns, ear_piercing, breasts, wings, single_wing, ponytail`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 1.03 GiB | [Download](https://huggingface.co/datasets/CyberHarem/kayoko_bluearchive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 500 | 847.15 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kayoko_bluearchive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1334 | 1.74 GiB | [Download](https://huggingface.co/datasets/CyberHarem/kayoko_bluearchive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kayoko_bluearchive',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 15 |  |  |  |  |  | 1girl, alternate_costume, black_choker, black_dress, earclip, solo, closed_mouth, collarbone, looking_at_viewer, piercing, blush, long_sleeves, small_breasts, side_slit, hairclip, simple_background, white_background, cowboy_shot, grey_halo, off_shoulder, sleeveless_dress, cardigan, earrings, black_nails, covered_navel, jacket |
| 1 | 58 |  |  |  |  |  | 1girl, official_alternate_costume, black_kimono, obi, solo, looking_at_viewer, hair_flower, hairclip, black_choker, long_sleeves, closed_mouth, wide_sleeves, piercing, blush, simple_background, smile, white_background, white_pupils, holding, upper_body |
| 2 | 15 |  |  |  |  |  | 1girl, looking_at_viewer, solo, closed_mouth, black_choker, black_hoodie, upper_body, collarbone, simple_background, blush, mole_on_neck, piercing, white_background |
| 3 | 8 |  |  |  |  |  | 1girl, black_hoodie, long_sleeves, looking_at_viewer, miniskirt, plaid_skirt, pleated_skirt, red_skirt, simple_background, solo, white_background, black_choker, closed_mouth, hands_in_pockets, bag, blush, cowboy_shot, hood_down, low_wings, piercing |
| 4 | 9 |  |  |  |  |  | 1girl, holding_umbrella, long_sleeves, solo, black_hoodie, blush, closed_mouth, looking_at_viewer, pleated_skirt, black_choker, plaid_skirt, rain, red_skirt, smile, outdoors, squatting, transparent_umbrella, backpack |
| 5 | 5 |  |  |  |  |  | 1girl, black_bikini, looking_at_viewer, navel, solo, stomach, alternate_costume, bare_shoulders, black_choker, blush, collarbone, demon_wings, small_breasts, closed_mouth, low_wings, side-tie_bikini_bottom, sitting, toes, bright_pupils, full_body, hand_up, legs, picnic_basket, piercing, sandals |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | alternate_costume | black_choker | black_dress | earclip | solo | closed_mouth | collarbone | looking_at_viewer | piercing | blush | long_sleeves | small_breasts | side_slit | hairclip | simple_background | white_background | cowboy_shot | grey_halo | off_shoulder | sleeveless_dress | cardigan | earrings | black_nails | covered_navel | jacket | official_alternate_costume | black_kimono | obi | hair_flower | wide_sleeves | smile | white_pupils | holding | upper_body | black_hoodie | mole_on_neck | miniskirt | plaid_skirt | pleated_skirt | red_skirt | hands_in_pockets | bag | hood_down | low_wings | holding_umbrella | rain | outdoors | squatting | transparent_umbrella | backpack | black_bikini | navel | stomach | bare_shoulders | demon_wings | side-tie_bikini_bottom | sitting | toes | bright_pupils | full_body | hand_up | legs | picnic_basket | sandals |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:---------------|:--------------|:----------|:-------|:---------------|:-------------|:--------------------|:-----------|:--------|:---------------|:----------------|:------------|:-----------|:--------------------|:-------------------|:--------------|:------------|:---------------|:-------------------|:-----------|:-----------|:--------------|:----------------|:---------|:-----------------------------|:---------------|:------|:--------------|:---------------|:--------|:---------------|:----------|:-------------|:---------------|:---------------|:------------|:--------------|:----------------|:------------|:-------------------|:------|:------------|:------------|:-------------------|:-------|:-----------|:------------|:-----------------------|:-----------|:---------------|:--------|:----------|:-----------------|:--------------|:-------------------------|:----------|:-------|:----------------|:------------|:----------|:-------|:----------------|:----------|
| 0 | 15 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 58 |  |  |  |  |  | X | | X | | | X | X | | X | X | X | X | | | X | X | X | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 15 |  |  |  |  |  | X | | X | | | X | X | X | X | X | X | | | | | X | X | | | | | | | | | | | | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 8 |  |  |  |  |  | X | | X | | | X | X | | X | X | X | X | | | | X | X | X | | | | | | | | | | | | | | | | | | X | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 4 | 9 |  |  |  |  |  | X | | X | | | X | X | | X | | X | X | | | | | | | | | | | | | | | | | | | | X | | | | X | | | X | X | X | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | X | X | | | X | X | X | X | X | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/db67d073 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 190
num_examples: 10
download_size: 1329
dataset_size: 190
---
# Dataset Card for "db67d073"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
airedefined/rm-gpt4llm-mix | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: instruction
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 3658310.1331026736
num_examples: 5400
- name: test
num_bytes: 406478.9036780748
num_examples: 600
download_size: 2463907
dataset_size: 4064789.0367807485
---
# Dataset Card for "gpt4llm-rm-training-6k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ouvic215/Test_Dataset_0213_hf | ---
dataset_info:
features:
- name: mask_image
dtype: image
- name: text
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 147332332.0
num_examples: 1588
download_size: 146499523
dataset_size: 147332332.0
---
# Dataset Card for "Test_Dataset_0213_hf"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jmonsa13/bonito_privacy_qa_sft_data | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 2093268
num_examples: 7830
- name: test
num_bytes: 530688
num_examples: 1958
download_size: 1061562
dataset_size: 2623956
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
CJWeiss/LGZ_inabs | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input
dtype: string
- name: output
dtype: string
- name: cluster
dtype: string
- name: old_id
dtype: int64
- name: length
dtype: int64
splits:
- name: train
num_bytes: 5501904
num_examples: 50
download_size: 2682006
dataset_size: 5501904
---
# Dataset Card for "LGZ_inabs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HuggingFaceM4/FairFace_support_query_sets |
distilled-one-sec-cv12-each-chunk-uniq/chunk_147 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1081230288.0
num_examples: 210684
download_size: 1106763443
dataset_size: 1081230288.0
---
# Dataset Card for "chunk_147"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
alt | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- bn
- en
- fil
- hi
- id
- ja
- km
- lo
- ms
- my
- th
- vi
- zh
license:
- cc-by-4.0
multilinguality:
- multilingual
- translation
size_categories:
- 100K<n<1M
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
- token-classification
task_ids:
- parsing
paperswithcode_id: alt
pretty_name: Asian Language Treebank
config_names:
- alt-en
- alt-jp
- alt-km
- alt-my
- alt-my-transliteration
- alt-my-west-transliteration
- alt-parallel
dataset_info:
- config_name: alt-en
features:
- name: SNT.URLID
dtype: string
- name: SNT.URLID.SNTID
dtype: string
- name: url
dtype: string
- name: status
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 10075569
num_examples: 17889
- name: validation
num_bytes: 544719
num_examples: 988
- name: test
num_bytes: 567272
num_examples: 1017
download_size: 3781814
dataset_size: 11187560
- config_name: alt-jp
features:
- name: SNT.URLID
dtype: string
- name: SNT.URLID.SNTID
dtype: string
- name: url
dtype: string
- name: status
dtype: string
- name: value
dtype: string
- name: word_alignment
dtype: string
- name: jp_tokenized
dtype: string
- name: en_tokenized
dtype: string
splits:
- name: train
num_bytes: 21888277
num_examples: 17202
- name: validation
num_bytes: 1181555
num_examples: 953
- name: test
num_bytes: 1175592
num_examples: 931
download_size: 10355366
dataset_size: 24245424
- config_name: alt-km
features:
- name: SNT.URLID
dtype: string
- name: SNT.URLID.SNTID
dtype: string
- name: url
dtype: string
- name: km_pos_tag
dtype: string
- name: km_tokenized
dtype: string
splits:
- name: train
num_bytes: 12015371
num_examples: 18088
- name: validation
num_bytes: 655212
num_examples: 1000
- name: test
num_bytes: 673733
num_examples: 1018
download_size: 4344096
dataset_size: 13344316
- config_name: alt-my
features:
- name: SNT.URLID
dtype: string
- name: SNT.URLID.SNTID
dtype: string
- name: url
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 20433243
num_examples: 18088
- name: validation
num_bytes: 1111394
num_examples: 1000
- name: test
num_bytes: 1135193
num_examples: 1018
download_size: 6569025
dataset_size: 22679830
- config_name: alt-my-transliteration
features:
- name: en
dtype: string
- name: my
sequence: string
splits:
- name: train
num_bytes: 4249316
num_examples: 84022
download_size: 2163951
dataset_size: 4249316
- config_name: alt-my-west-transliteration
features:
- name: en
dtype: string
- name: my
sequence: string
splits:
- name: train
num_bytes: 7411911
num_examples: 107121
download_size: 2857511
dataset_size: 7411911
- config_name: alt-parallel
features:
- name: SNT.URLID
dtype: string
- name: SNT.URLID.SNTID
dtype: string
- name: url
dtype: string
- name: translation
dtype:
translation:
languages:
- bg
- en
- en_tok
- fil
- hi
- id
- ja
- khm
- lo
- ms
- my
- th
- vi
- zh
splits:
- name: train
num_bytes: 68445916
num_examples: 18088
- name: validation
num_bytes: 3710979
num_examples: 1000
- name: test
num_bytes: 3814431
num_examples: 1019
download_size: 34707907
dataset_size: 75971326
configs:
- config_name: alt-en
data_files:
- split: train
path: alt-en/train-*
- split: validation
path: alt-en/validation-*
- split: test
path: alt-en/test-*
- config_name: alt-jp
data_files:
- split: train
path: alt-jp/train-*
- split: validation
path: alt-jp/validation-*
- split: test
path: alt-jp/test-*
- config_name: alt-km
data_files:
- split: train
path: alt-km/train-*
- split: validation
path: alt-km/validation-*
- split: test
path: alt-km/test-*
- config_name: alt-my
data_files:
- split: train
path: alt-my/train-*
- split: validation
path: alt-my/validation-*
- split: test
path: alt-my/test-*
- config_name: alt-my-transliteration
data_files:
- split: train
path: alt-my-transliteration/train-*
- config_name: alt-my-west-transliteration
data_files:
- split: train
path: alt-my-west-transliteration/train-*
- config_name: alt-parallel
data_files:
- split: train
path: alt-parallel/train-*
- split: validation
path: alt-parallel/validation-*
- split: test
path: alt-parallel/test-*
default: true
---
# Dataset Card for Asian Language Treebank (ALT)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/
- **Leaderboard:**
- **Paper:** [Introduction of the Asian Language Treebank](https://ieeexplore.ieee.org/abstract/document/7918974)
- **Point of Contact:** [ALT info](mailto:alt-info@khn.nict.go.jp)
### Dataset Summary
The ALT project aims to advance the state-of-the-art Asian natural language processing (NLP) techniques through the open collaboration for developing and using ALT. It was first conducted by NICT and UCSY as described in Ye Kyaw Thu, Win Pa Pa, Masao Utiyama, Andrew Finch and Eiichiro Sumita (2016). Then, it was developed under [ASEAN IVO](https://www.nict.go.jp/en/asean_ivo/index.html) as described in this Web page.
The process of building ALT began with sampling about 20,000 sentences from English Wikinews, and then these sentences were translated into the other languages.
### Supported Tasks and Leaderboards
Machine Translation, Dependency Parsing
### Languages
It supports 13 languages:
* Bengali
* English
* Filipino
* Hindi
* Bahasa Indonesia
* Japanese
* Khmer
* Lao
* Malay
* Myanmar (Burmese)
* Thai
* Vietnamese
* Chinese (Simplified Chinese).
## Dataset Structure
### Data Instances
#### ALT Parallel Corpus
```
{
"SNT.URLID": "80188",
"SNT.URLID.SNTID": "1",
"url": "http://en.wikinews.org/wiki/2007_Rugby_World_Cup:_Italy_31_-_5_Portugal",
"bg": "[translated sentence]",
"en": "[translated sentence]",
"en_tok": "[translated sentence]",
"fil": "[translated sentence]",
"hi": "[translated sentence]",
"id": "[translated sentence]",
"ja": "[translated sentence]",
"khm": "[translated sentence]",
"lo": "[translated sentence]",
"ms": "[translated sentence]",
"my": "[translated sentence]",
"th": "[translated sentence]",
"vi": "[translated sentence]",
"zh": "[translated sentence]"
}
```
#### ALT Treebank
```
{
"SNT.URLID": "80188",
"SNT.URLID.SNTID": "1",
"url": "http://en.wikinews.org/wiki/2007_Rugby_World_Cup:_Italy_31_-_5_Portugal",
"status": "draft/reviewed",
"value": "(S (S (BASENP (NNP Italy)) (VP (VBP have) (VP (VP (VP (VBN defeated) (BASENP (NNP Portugal))) (ADVP (RB 31-5))) (PP (IN in) (NP (BASENP (NNP Pool) (NNP C)) (PP (IN of) (NP (BASENP (DT the) (NN 2007) (NNP Rugby) (NNP World) (NNP Cup)) (PP (IN at) (NP (BASENP (NNP Parc) (FW des) (NNP Princes)) (COMMA ,) (BASENP (NNP Paris) (COMMA ,) (NNP France))))))))))) (PERIOD .))"
}
```
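The bracketed `value` string above is a standard S-expression-style parse tree. A minimal sketch (not part of the official ALT tooling) of parsing it into nested `(label, children)` tuples:

```python
def parse_tree(s: str):
    """Parse a bracketed parse tree like '(S (BASENP (NNP Italy)) ...)'."""
    # Pad parentheses with spaces so a simple split yields clean tokens.
    tokens = s.replace("(", " ( ").replace(")", " ) ").split()

    def read(pos: int):
        # tokens[pos] must be '('; the next token is the constituent label
        assert tokens[pos] == "("
        label = tokens[pos + 1]
        pos += 2
        children = []
        while tokens[pos] != ")":
            if tokens[pos] == "(":
                child, pos = read(pos)
                children.append(child)
            else:  # terminal word
                children.append(tokens[pos])
                pos += 1
        return (label, children), pos + 1

    tree, _ = read(0)
    return tree

tree = parse_tree("(S (BASENP (NNP Italy)) (VP (VBP have)))")
# → ('S', [('BASENP', [('NNP', ['Italy'])]), ('VP', [('VBP', ['have'])])])
```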
#### ALT Myanmar transliteration
```
{
"en": "CASINO",
"my": [
"ကက်စီနို",
"ကစီနို",
"ကာစီနို",
"ကာဆီနို"
]
}
```
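The one-to-many English-to-Myanmar mapping above can be inverted into a variant-to-English lookup; a minimal sketch using the example record:

```python
# Sketch: invert the one-to-many English -> Myanmar transliteration mapping
# into a variant -> English lookup. The record is copied from the example above.
records = [
    {"en": "CASINO", "my": ["ကက်စီနို", "ကစီနို", "ကာစီနို", "ကာဆီနို"]},
]
variant_to_en = {
    variant: record["en"]
    for record in records
    for variant in record["my"]
}
assert variant_to_en["ကစီနို"] == "CASINO"
```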
### Data Fields
#### ALT Parallel Corpus
- SNT.URLID: URL link to the source article listed in [URL.txt](https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/ALT-Parallel-Corpus-20191206/URL.txt)
- SNT.URLID.SNTID: index number from 1 to 20000. It is a selected sentence from `SNT.URLID`
- bg, en, fil, hi, id, ja, khm, lo, ms, my, th, vi, zh correspond to the target languages
#### ALT Treebank
- status: it indicates how a sentence is annotated; `draft` sentences are annotated by one annotator and `reviewed` sentences are annotated by two annotators
The annotation differs from language to language; please see [their guidelines](https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/) for more detail.
### Data Splits
| | train | valid | test |
|-----------|-------|-------|-------|
| # articles | 1698 | 98 | 97 |
| # sentences | 18088 | 1000 | 1018 |
## Dataset Creation
### Curation Rationale
The ALT project was initiated by the [National Institute of Information and Communications Technology, Japan](https://www.nict.go.jp/en/) (NICT) in 2014. NICT started to build Japanese and English ALT and worked with the University of Computer Studies, Yangon, Myanmar (UCSY) to build Myanmar ALT in 2014. Then, the Badan Pengkajian dan Penerapan Teknologi, Indonesia (BPPT), the Institute for Infocomm Research, Singapore (I2R), the Institute of Information Technology, Vietnam (IOIT), and the National Institute of Posts, Telecoms and ICT, Cambodia (NIPTICT) joined to make ALT for Indonesian, Malay, Vietnamese, and Khmer in 2015.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
The dataset is sampled from the English Wikinews in 2014. The sentences are annotated with word segmentation, POS tags, and syntax information, in addition to word alignment information, by linguistic experts from
* National Institute of Information and Communications Technology, Japan (NICT) for Japanses and English
* University of Computer Studies, Yangon, Myanmar (UCSY) for Myanmar
* the Badan Pengkajian dan Penerapan Teknologi, Indonesia (BPPT) for Indonesian
* the Institute for Infocomm Research, Singapore (I2R) for Malay
* the Institute of Information Technology, Vietnam (IOIT) for Vietnamese
* the National Institute of Posts, Telecoms and ICT, Cambodia for Khmer
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
* National Institute of Information and Communications Technology, Japan (NICT) for Japanese and English
* University of Computer Studies, Yangon, Myanmar (UCSY) for Myanmar
* the Badan Pengkajian dan Penerapan Teknologi, Indonesia (BPPT) for Indonesian
* the Institute for Infocomm Research, Singapore (I2R) for Malay
* the Institute of Information Technology, Vietnam (IOIT) for Vietnamese
* the National Institute of Posts, Telecoms and ICT, Cambodia for Khmer
### Licensing Information
[Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
Please cite the following if you make use of the dataset:
Hammam Riza, Michael Purwoadi, Gunarso, Teduh Uliniansyah, Aw Ai Ti, Sharifah Mahani Aljunied, Luong Chi Mai, Vu Tat Thang, Nguyen Phuong Thai, Vichet Chea, Rapid Sun, Sethserey Sam, Sopheap Seng, Khin Mar Soe, Khin Thandar Nwet, Masao Utiyama, Chenchen Ding. (2016) "Introduction of the Asian Language Treebank" Oriental COCOSDA.
BibTeX:
```
@inproceedings{riza2016introduction,
title={Introduction of the asian language treebank},
author={Riza, Hammam and Purwoadi, Michael and Uliniansyah, Teduh and Ti, Aw Ai and Aljunied, Sharifah Mahani and Mai, Luong Chi and Thang, Vu Tat and Thai, Nguyen Phuong and Chea, Vichet and Sam, Sethserey and others},
booktitle={2016 Conference of The Oriental Chapter of International Committee for Coordination and Standardization of Speech Databases and Assessment Techniques (O-COCOSDA)},
pages={1--6},
year={2016},
organization={IEEE}
}
```
### Contributions
Thanks to [@chameleonTK](https://github.com/chameleonTK) for adding this dataset.