| datasetId | card |
|---|---|
liuyanchen1015/MULTI_VALUE_sst2_object_pronoun_drop | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev
num_bytes: 9617
num_examples: 65
- name: test
num_bytes: 20095
num_examples: 136
- name: train
num_bytes: 316207
num_examples: 2898
download_size: 181968
dataset_size: 345919
---
# Dataset Card for "MULTI_VALUE_sst2_object_pronoun_drop"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
edbeeching/prj_gia_dataset_atari_2B_atari_gopher_1111 | ---
library_name: gia
tags:
- deep-reinforcement-learning
- reinforcement-learning
- gia
- multi-task
- multi-modal
- imitation-learning
- offline-reinforcement-learning
---
An imitation learning dataset for the atari_gopher environment, sampled from the policy atari_2B_atari_gopher_1111.
This dataset was created as part of the Generally Intelligent Agents project (gia): https://github.com/huggingface/gia
|
kpriyanshu256/MultiTabQA-multitable_pretraining-Salesforce-codet5-base_train-html-2000 | ---
dataset_info:
features:
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: labels
sequence:
sequence: int64
splits:
- name: train
num_bytes: 13336000
num_examples: 1000
download_size: 655017
dataset_size: 13336000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
autoevaluate/autoeval-eval-samsum-samsum-52efcb-93192145784 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: sshleifer/distilbart-xsum-12-6
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: sshleifer/distilbart-xsum-12-6
* Dataset: samsum
* Config: samsum
* Split: test
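The `col_mapping` in the metadata above tells the evaluator which dataset columns to read as the input text and target summary. Applied to a samsum record, it amounts to a simple rename (a sketch; the sample record below is invented for illustration):

```python
# Sketch of how col_mapping (text -> dialogue, target -> summary) is applied.
col_mapping = {"text": "dialogue", "target": "summary"}

# An invented samsum-style record.
record = {
    "dialogue": "Amanda: I baked cookies. Do you want some?\nJerry: Sure!",
    "summary": "Amanda baked cookies and offers some to Jerry.",
}

# Build the evaluator-facing example by pulling each mapped column.
example = {dest: record[src] for dest, src in col_mapping.items()}
print(example["text"])
```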
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sasha](https://huggingface.co/sasha) for evaluating this model. |
RBTL/Erotico | ---
license: openrail
---
|
razaulhaq/nhtsa_complaints | ---
license: mit
---
|
llm4pm/process_mining_questions | ---
license: gpl-2.0
language:
- en
--- |
SEACrowd/unimorph_id | ---
tags:
- morphological-inflection
language:
- ind
---
# unimorph_id
The UniMorph project, Indonesian chapter.
Due to the sparsity of the original UniMorph parsing, the raw source is used instead.
The original parsing can be found at https://huggingface.co/datasets/universal_morphologies/blob/2.3.2/universal_morphologies.py
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through Hugging Face's `load_dataset`.
## Citation
```
@inproceedings{pimentel-ryskina-etal-2021-sigmorphon,
title = "SIGMORPHON 2021 Shared Task on Morphological Reinflection: Generalization Across Languages",
author = "Pimentel, Tiago and
Ryskina, Maria and
Mielke, Sabrina J. and
Wu, Shijie and
Chodroff, Eleanor and
Leonard, Brian and
Nicolai, Garrett and
Ghanggo Ate, Yustinus and
Khalifa, Salam and
Habash, Nizar and
El-Khaissi, Charbel and
Goldman, Omer and
Gasser, Michael and
Lane, William and
Coler, Matt and
Oncevay, Arturo and
Montoya Samame, Jaime Rafael and
Silva Villegas, Gema Celeste and
Ek, Adam and
Bernardy, Jean-Philippe and
Shcherbakov, Andrey and
Bayyr-ool, Aziyana and
Sheifer, Karina and
Ganieva, Sofya and
Plugaryov, Matvey and
Klyachko, Elena and
Salehi, Ali and
Krizhanovsky, Andrew and
Krizhanovsky, Natalia and
Vania, Clara and
Ivanova, Sardana and
Salchak, Aelita and
Straughn, Christopher and
Liu, Zoey and
Washington, Jonathan North and
Ataman, Duygu and
Kiera{'s}, Witold and
Woli{'n}ski, Marcin and
Suhardijanto, Totok and
Stoehr, Niklas and
Nuriah, Zahroh and
Ratan, Shyam and
Tyers, Francis M. and
Ponti, Edoardo M. and
Aiton, Grant and
Hatcher, Richard J. and
Prud'hommeaux, Emily and
Kumar, Ritesh and
Hulden, Mans and
Barta, Botond and
Lakatos, Dorina and
Szolnok, G{'a}bor and
{'A}cs, Judit and
Raj, Mohit and
Yarowsky, David and
Cotterell, Ryan and
Ambridge, Ben and
Vylomova, Ekaterina",
booktitle = "Proceedings of the 18th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.sigmorphon-1.25",
doi = "10.18653/v1/2021.sigmorphon-1.25",
pages = "229--259"
}
```
## License
Creative Commons Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0)
## Homepage
[https://github.com/unimorph/ind](https://github.com/unimorph/ind)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) |
heliosprime/twitter_dataset_1713041352 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 13588
num_examples: 31
download_size: 9228
dataset_size: 13588
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1713041352"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jags/floral | ---
license: mit
---
This is a floral dataset for training textual inversion in Stable Diffusion, added here for future reference and additional implementation. |
IWSLT/IWSLT.OfflineTask | ---
license: cc-by-nc-nd-4.0
task_categories:
- translation
- automatic-speech-recognition
language:
- en
- de
pretty_name: IWSLT Offline task Test Sets
size_categories:
- 1K<n<10K
--- |
sarahyun/your_dataset_name | ---
dataset_info:
features: []
splits:
- name: train
- name: validation
download_size: 0
dataset_size: 0
---
# Dataset Card for "your_dataset_name"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Paul/hatecheck-portuguese | ---
annotations_creators:
- crowdsourced
language_creators:
- expert-generated
language:
- pt
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Portuguese HateCheck
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
---
# Dataset Card for Multilingual HateCheck
## Dataset Description
Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.
For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.
This allows for targeted diagnostic insights into model performance.
For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!
- **Paper:** Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. https://arxiv.org/abs/2206.09917
- **Repository:** https://github.com/rewire-online/multilingual-hatecheck
- **Point of Contact:** paul@rewire.online
## Dataset Structure
The csv format mostly matches the original HateCheck data, with some adjustments for specific languages.
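As an illustration, the relationship between the `label_annotated`, `label_annotated_maj`, and `disagreement_in_case` fields described below can be reproduced in a few lines (a sketch; the sample row is invented, and the real CSV stores `label_annotated` as a string rather than a list):

```python
from collections import Counter

# Invented sample row mirroring the MHC columns described in this card.
row = {
    "label_gold": "hateful",
    "label_annotated": ["hateful", "non-hateful", "hateful"],
}

# label_annotated_maj: majority vote of the three annotators.
majority = Counter(row["label_annotated"]).most_common(1)[0][0]

# disagreement_in_case: True when the majority vote differs from the gold label.
disagreement_in_case = majority != row["label_gold"]
```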
**mhc_case_id**
The test case ID that is unique to each test case across languages (e.g., "mandarin-1305")
**functionality**
The shorthand for the functionality tested by the test case (e.g., "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations.
**test_case**
The test case text.
**label_gold**
The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label.
**target_ident**
Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages.
**ref_case_id**
For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.
**ref_templ_id**
The equivalent to ref_case_id, but for template IDs.
**templ_id**
The ID of the template from which the test case was generated.
**case_templ**
The template from which the test case was generated (where applicable).
**gender_male** and **gender_female**
For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.
**label_annotated**
A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']").
**label_annotated_maj**
The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts.
**disagreement_in_case**
True if label_annotated_maj does not match label_gold for the entry.
**disagreement_in_template**
True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC. |
liuyanchen1015/MULTI_VALUE_mrpc_our_we | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: test
num_bytes: 8781
num_examples: 30
- name: train
num_bytes: 12766
num_examples: 44
- name: validation
num_bytes: 2540
num_examples: 9
download_size: 28495
dataset_size: 24087
---
# Dataset Card for "MULTI_VALUE_mrpc_our_we"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yzhuang/autotree_pmlb_banana_sgosdt_l256_d3_sd0 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 123760000
num_examples: 10000
- name: validation
num_bytes: 123760000
num_examples: 10000
download_size: 49313232
dataset_size: 247520000
---
# Dataset Card for "autotree_pmlb_banana_sgosdt_l256_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
datajuicer/redpajama-c4-refined-by-data-juicer | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- data-juicer
- pretraining
size_categories:
- 100M<n<1B
---
# RedPajama -- C4 (refined by Data-Juicer)
A refined version of the C4 subset of RedPajama, produced by [Data-Juicer](https://github.com/alibaba/data-juicer). Some "bad" samples were removed from the original dataset to make it higher quality.
This dataset is typically used to pretrain a large language model.
**Notice**: Here is a small subset for previewing. The whole dataset is available [here](https://dail-wlcb.oss-cn-wulanchabu.aliyuncs.com/LLM_data/our_refined_datasets/pretraining/redpajama-c4-refine-result.jsonl) (About 832GB).
## Dataset Information
- Number of samples: 344,491,171 (Keep ~94.42% from the original dataset)
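As an illustration of what one operator in the recipe below does, here is a minimal re-implementation of the `alphanumeric_filter` ratio check with the same thresholds (a sketch for intuition only, not Data-Juicer's actual code):

```python
def passes_alphanumeric_filter(text: str, min_ratio: float = 0.65, max_ratio: float = 0.9) -> bool:
    """Keep a sample only if its alphanumeric-character ratio lies in [min_ratio, max_ratio]."""
    if not text:
        return False
    ratio = sum(ch.isalnum() for ch in text) / len(text)
    return min_ratio <= ratio <= max_ratio

# A normal English sentence falls inside the band (ratio ~0.80 here)...
print(passes_alphanumeric_filter("The quick brown fox jumps over the lazy dog."))  # -> True
# ...while symbol-heavy noise falls below it.
print(passes_alphanumeric_filter("!!! ### $$$ %%% ^^^ &&&"))  # -> False
```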
## Refining Recipe
```yaml
# global parameters
project_name: 'Data-Juicer-recipes-c4'
dataset_path: '/path/to/your/dataset' # path to your dataset directory or file
export_path: '/path/to/your/dataset.jsonl' # path to your dataset result file
np: 50 # number of subprocesses to process your dataset
open_tracer: True
# process schedule
# a list of several process operators with their arguments
process:
- clean_email_mapper:
- clean_links_mapper:
- fix_unicode_mapper:
- punctuation_normalization_mapper:
- whitespace_normalization_mapper:
- alphanumeric_filter:
tokenization: false
min_ratio: 0.65 # <3sigma (0.740)
max_ratio: 0.9 # >3sigma (0.867)
- average_line_length_filter: # for code
max_len: 3000 # >3sigma (1277)
- character_repetition_filter:
rep_len: 10
max_ratio: 0.3 # >3sigma (0.168)
- language_id_score_filter:
min_score: 0.6
- maximum_line_length_filter: # for code
max_len: 4000 # >3sigma (2017)
- perplexity_filter:
lang: en
max_ppl: 6000 #(>3sigma 4543)
- special_characters_filter:
max_ratio: 0.4 # > 3sigma (0.303)
- words_num_filter:
tokenization: true
min_num: 20
max_num: 10000
- word_repetition_filter:
lang: en
tokenization: true
rep_len: 10
max_ratio: 0.231 # 3sigma
- document_simhash_deduplicator:
tokenization: space
window_size: 6
lowercase: true
ignore_pattern: '\p{P}'
num_blocks: 6
hamming_distance: 4
``` |
LexiconShiftInnovations/SinhalaSubtitlesDataset | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 53748358
num_examples: 797375
download_size: 23676407
dataset_size: 53748358
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mesolitica/translated-MMLU | ---
language:
- ms
---
# Translated MMLU
Originally from https://huggingface.co/datasets/cais/mmlu, translated to Malay using Google Translate.
## Precaution
1. We found that some of the translated answers are not coherent with the original English answers, so it is better to skip the translated answers. |
UnbiasedMoldInspectionsIN/7thTry | ---
license: apache-2.0
---
|
iara-project/news-articles-ptbr-dataset | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: title
dtype: string
- name: text
dtype: string
- name: date
dtype: string
- name: category
dtype: string
- name: category_natural_language
dtype: string
- name: link
dtype: string
splits:
- name: train
num_bytes: 628987914
num_examples: 176114
- name: test
num_bytes: 627415372
num_examples: 176114
download_size: 770300096
dataset_size: 1256403286
---
# Dataset Card for "news-articles-ptbr-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rishthak/album-genres-rap | ---
license: apache-2.0
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1444944.0
num_examples: 10
download_size: 1446235
dataset_size: 1444944.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
reddit_tifu | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: Reddit TIFU
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- summarization
task_ids: []
paperswithcode_id: reddit-tifu
tags:
- reddit-posts-summarization
dataset_info:
- config_name: short
features:
- name: ups
dtype: float32
- name: num_comments
dtype: float32
- name: upvote_ratio
dtype: float32
- name: score
dtype: float32
- name: documents
dtype: string
- name: tldr
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 137715925
num_examples: 79740
download_size: 670607856
dataset_size: 137715925
- config_name: long
features:
- name: ups
dtype: float32
- name: num_comments
dtype: float32
- name: upvote_ratio
dtype: float32
- name: score
dtype: float32
- name: documents
dtype: string
- name: tldr
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 91984758
num_examples: 42139
download_size: 670607856
dataset_size: 91984758
---
# Dataset Card for "reddit_tifu"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/ctr4si/MMN](https://github.com/ctr4si/MMN)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1.34 GB
- **Size of the generated dataset:** 229.76 MB
- **Total amount of disk used:** 1.57 GB
### Dataset Summary
Reddit dataset, where TIFU denotes the name of the subreddit /r/tifu.
As defined in the publication, the "short" style uses the title as the summary and
the "long" style uses the tldr as the summary.
Features include:
- document: post text without tldr.
- tldr: tldr line.
- title: trimmed title without tldr.
- ups: upvotes.
- score: score.
- num_comments: number of comments.
- upvote_ratio: upvote ratio.
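The short/long distinction above can be expressed as a small helper that picks the summary field for a given config (a sketch; the sample post below is invented):

```python
def get_summary(example: dict, config: str) -> str:
    """Per the card: config "short" uses the title as summary, "long" uses the tldr."""
    if config == "short":
        return example["title"]
    if config == "long":
        return example["tldr"]
    raise ValueError(f"unknown config: {config!r}")

# An invented post with the fields listed in this card.
post = {
    "documents": "i forgot to set an alarm and slept through my exam.",
    "title": "sleeping through my final exam",
    "tldr": "no alarm, no exam, no degree progress.",
}
print(get_summary(post, "short"))  # -> "sleeping through my final exam"
```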
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### long
- **Size of downloaded dataset files:** 670.61 MB
- **Size of the generated dataset:** 92.00 MB
- **Total amount of disk used:** 762.62 MB
An example of 'train' looks as follows.
```
{'ups': 115.0,
'num_comments': 23.0,
'upvote_ratio': 0.88,
'score': 115.0,
'documents': 'this actually happened a couple of years ago. i grew up in germany where i went to a german secondary school that went from 5th to 13th grade (we still had 13 grades then, they have since changed that). my school was named after anne frank and we had a club that i was very active in from 9th grade on, which was dedicated to teaching incoming 5th graders about anne franks life, discrimination, anti-semitism, hitler, the third reich and that whole spiel. basically a day where the students\' classes are cancelled and instead we give them an interactive history and social studies class with lots of activities and games. \n\nthis was my last year at school and i already had a lot of experience doing these project days with the kids. i was running the thing with a friend, so it was just the two of us and 30-something 5th graders. we start off with a brief introduction and brainstorming: what do they know about anne frank and the third reich? you\'d be surprised how much they know. anyway after the brainstorming we do a few activities, and then we take a short break. after the break we split the class into two groups to make it easier to handle. one group watches a short movie about anne frank while the other gets a tour through our poster presentation that our student group has been perfecting over the years. then the groups switch. \n\ni\'m in the classroom to show my group the movie and i take attendance to make sure no one decided to run away during break. i\'m going down the list when i come to the name sandra (name changed). a kid with a boyish haircut and a somewhat deeper voice, wearing clothes from the boy\'s section at a big clothing chain in germany, pipes up. \n\nnow keep in mind, these are all 11 year olds, they are all pre-pubescent, their bodies are not yet showing any sex specific features one would be able to see while they are fully clothed (e.g. boobs, beards,...). 
this being a 5th grade in the rather conservative (for german standards) bavaria, i was confused. i looked down at the list again making sure i had read the name right. look back up at the kid. \n\nme: "you\'re sandra?"\n\nkid: "yep."\n\nme: "oh, sorry. *thinking the kid must be from somewhere where sandra is both a girl\'s and boy\'s name* where are you from? i\'ve only ever heard that as a girl\'s name before."\n\nthe class starts laughing. sandra gets really quiet. "i am a girl..." she says. some of the other students start saying that their parents made the same mistake when they met sandra. i feel so sorry and stupid. i get the class to calm down and finish taking attendance. we watch the movie in silence. after the movie, when we walked down to where the poster presentation took place i apologised to sandra. i felt so incredibly terrible, i still do to this day. throughout the rest of the day i heard lots of whispers about sandra. i tried to stop them whenever they came up, but there was no stopping the 5th grade gossip i had set in motion.\n\nsandra, if you\'re out there, i am so incredibly sorry for humiliating you in front of your class. i hope you are happy and healthy and continue to live your life the way you like. don\'t let anyone tell you you have to dress or act a certain way just because of the body parts you were born with. i\'m sorry if i made you feel like you were wrong for dressing and acting differently. i\'m sorry i probably made that day hell for you. i\'m sorry for my ignorance.',
'tldr': 'confuse a 5th grade girl for a boy in front of half of her class. kids are mean. sorry sandra.**',
'title': 'gender-stereotyping'}
```
#### short
- **Size of downloaded dataset files:** 670.61 MB
- **Size of the generated dataset:** 137.75 MB
- **Total amount of disk used:** 808.37 MB
An example of 'train' looks as follows.
```
{'ups': 50.0,
'num_comments': 13.0,
'upvote_ratio': 0.77,
'score': 50.0,
'documents': "i was on skype on my tablet as i went to the toilet iming a friend. i don't multitask very well, so i forgot one of the most important things to do before pooping. i think the best part was when i realised and told my mate who just freaked out because i was talking to him on the john!",
'tldr': '',
'title': 'forgetting to pull my underwear down before i pooped.'}
```
### Data Fields
The data fields are the same among all splits.
#### long
- `ups`: a `float32` feature.
- `num_comments`: a `float32` feature.
- `upvote_ratio`: a `float32` feature.
- `score`: a `float32` feature.
- `documents`: a `string` feature.
- `tldr`: a `string` feature.
- `title`: a `string` feature.
#### short
- `ups`: a `float32` feature.
- `num_comments`: a `float32` feature.
- `upvote_ratio`: a `float32` feature.
- `score`: a `float32` feature.
- `documents`: a `string` feature.
- `tldr`: a `string` feature.
- `title`: a `string` feature.
### Data Splits
|name |train|
|-----|----:|
|long |42139|
|short|79740|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
MIT License.
### Citation Information
```
@misc{kim2018abstractive,
title={Abstractive Summarization of Reddit Posts with Multi-level Memory Networks},
author={Byeongchang Kim and Hyunwoo Kim and Gunhee Kim},
year={2018},
eprint={1811.00783},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
open-source-metrics/transformers-dependents | ---
license: apache-2.0
pretty_name: transformers metrics
tags:
- github-stars
---
# transformers metrics
This dataset contains metrics about the huggingface/transformers package.
Number of repositories in the dataset: 27067
Number of packages in the dataset: 823
## Package dependents
This contains the data available in the [used-by](https://github.com/huggingface/transformers/network/dependents)
tab on GitHub.
### Package & Repository star count
This section shows the package and repository star count, individually.
Package | Repository
:-------------------------:|:-------------------------:
 | 
There are 65 packages that have more than 1000 stars.
There are 140 repositories that have more than 1000 stars.
The top 10 in each category are the following:
*Package*
[hankcs/HanLP](https://github.com/hankcs/HanLP): 26958
[fastai/fastai](https://github.com/fastai/fastai): 22774
[slundberg/shap](https://github.com/slundberg/shap): 17482
[fastai/fastbook](https://github.com/fastai/fastbook): 16052
[jina-ai/jina](https://github.com/jina-ai/jina): 16052
[huggingface/datasets](https://github.com/huggingface/datasets): 14101
[microsoft/recommenders](https://github.com/microsoft/recommenders): 14017
[borisdayma/dalle-mini](https://github.com/borisdayma/dalle-mini): 12872
[flairNLP/flair](https://github.com/flairNLP/flair): 12033
[allenai/allennlp](https://github.com/allenai/allennlp): 11198
*Repository*
[huggingface/transformers](https://github.com/huggingface/transformers): 70487
[hankcs/HanLP](https://github.com/hankcs/HanLP): 26959
[ageron/handson-ml2](https://github.com/ageron/handson-ml2): 22886
[ray-project/ray](https://github.com/ray-project/ray): 22047
[jina-ai/jina](https://github.com/jina-ai/jina): 16052
[RasaHQ/rasa](https://github.com/RasaHQ/rasa): 14844
[microsoft/recommenders](https://github.com/microsoft/recommenders): 14017
[deeplearning4j/deeplearning4j](https://github.com/deeplearning4j/deeplearning4j): 12617
[flairNLP/flair](https://github.com/flairNLP/flair): 12034
[allenai/allennlp](https://github.com/allenai/allennlp): 11198
### Package & Repository fork count
This section shows the package and repository fork count, individually.
Package | Repository
:-------------------------:|:-------------------------:
 | 
There are 55 packages that have more than 200 forks.
There are 128 repositories that have more than 200 forks.
The top 10 in each category are the following:
*Package*
[hankcs/HanLP](https://github.com/hankcs/HanLP): 7388
[fastai/fastai](https://github.com/fastai/fastai): 7297
[fastai/fastbook](https://github.com/fastai/fastbook): 6033
[slundberg/shap](https://github.com/slundberg/shap): 2646
[microsoft/recommenders](https://github.com/microsoft/recommenders): 2473
[allenai/allennlp](https://github.com/allenai/allennlp): 2218
[jina-ai/clip-as-service](https://github.com/jina-ai/clip-as-service): 1972
[jina-ai/jina](https://github.com/jina-ai/jina): 1967
[flairNLP/flair](https://github.com/flairNLP/flair): 1934
[huggingface/datasets](https://github.com/huggingface/datasets): 1841
*Repository*
[huggingface/transformers](https://github.com/huggingface/transformers): 16159
[ageron/handson-ml2](https://github.com/ageron/handson-ml2): 11053
[hankcs/HanLP](https://github.com/hankcs/HanLP): 7389
[aws/amazon-sagemaker-examples](https://github.com/aws/amazon-sagemaker-examples): 5493
[deeplearning4j/deeplearning4j](https://github.com/deeplearning4j/deeplearning4j): 4933
[RasaHQ/rasa](https://github.com/RasaHQ/rasa): 4106
[ray-project/ray](https://github.com/ray-project/ray): 3876
[apache/beam](https://github.com/apache/beam): 3648
[plotly/dash-sample-apps](https://github.com/plotly/dash-sample-apps): 2795
[microsoft/recommenders](https://github.com/microsoft/recommenders): 2473
|
Ayush2312/2kTherapydataset_formatted | ---
dataset_info:
features:
- name: train
dtype: string
splits:
- name: train
num_bytes: 8265127
num_examples: 2000
download_size: 4164377
dataset_size: 8265127
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mtc/factcc_annotated_eval_data | ---
dataset_info:
features:
- name: claim
dtype: string
- name: label
dtype: string
- name: filepath
dtype: string
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: validation
num_bytes: 3261639
num_examples: 931
- name: test
num_bytes: 2060131
num_examples: 503
download_size: 1191194
dataset_size: 5321770
---
# Dataset Card for "factcc_annotated_eval_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
anirudhlakhotia/baarat-romhi-hi-200k | ---
dataset_info:
features:
- name: data
struct:
- name: Source_Language
dtype: string
- name: Target_Language
dtype: string
- name: id
dtype: int64
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 243861047.0987602
num_examples: 200000
download_size: 121016217
dataset_size: 243861047.0987602
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
albertmartinez/OSDG | ---
license: mit
task_categories:
- text-classification
pretty_name: OSDG Community Dataset (OSDG-CD)
dataset_info:
- config_name: '2023-07-01'
features:
- name: text
dtype: string
- name: labels
dtype:
class_label:
names:
'0': sdg1
'1': sdg2
'2': sdg3
'3': sdg4
'4': sdg5
'5': sdg6
'6': sdg7
'7': sdg8
'8': sdg9
'9': sdg10
'10': sdg11
'11': sdg12
'12': sdg13
'13': sdg14
'14': sdg15
'15': sdg16
splits:
- name: train
num_bytes: 18821023
num_examples: 29445
- name: test
num_bytes: 8033142
num_examples: 12620
download_size: 16259463
dataset_size: 26854165
- config_name: '2024-01-01'
default: true
features:
- name: text
dtype: string
- name: labels
dtype:
class_label:
names:
'0': sdg1
'1': sdg2
'2': sdg3
'3': sdg4
'4': sdg5
'5': sdg6
'6': sdg7
'7': sdg8
'8': sdg9
'9': sdg10
'10': sdg11
'11': sdg12
'12': sdg13
'13': sdg14
'14': sdg15
'15': sdg16
splits:
- name: train
num_bytes: 19083808
num_examples: 29844
- name: test
num_bytes: 8107107
num_examples: 12791
download_size: 16476873
dataset_size: 27190915
configs:
- config_name: '2023-07-01'
data_files:
- split: train
path: 2023-07-01/train-*
- split: test
path: 2023-07-01/test-*
- config_name: '2024-01-01'
default: true
data_files:
- split: train
path: 2024-01-01/train-*
- split: test
path: 2024-01-01/test-*
tags:
- SDG
---
https://zenodo.org/records/10579179 |
Cohere/miracl-yo-queries-22-12 | ---
annotations_creators:
- expert-generated
language:
- yo
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (yo) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-yo-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-yo-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-yo-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-yo-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-yo-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-yo-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-yo-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-yo-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-yo-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-yo-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search the documents, you must use the **dot-product** similarity: compute the embedding for your query, then compare it against the document embeddings, either with a vector database (recommended) or by computing the dot products directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-yo-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-yo-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape (1, dim), as torch.mm needs 2-D inputs
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(api_key)  # add your Cohere API key here
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
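As an illustration, a minimal sketch of the hit@k computation, using toy query results (not actual MIRACL data):

```python
def hit_at_k(ranked_doc_ids, relevant_ids, k=3):
    """Return 1 if at least one relevant document appears in the top-k results."""
    return int(any(doc_id in relevant_ids for doc_id in ranked_doc_ids[:k]))

# Two toy queries: (ranked retrieval results, set of relevant doc ids)
queries = {
    "q1": (["d3", "d7", "d1"], {"d1"}),  # relevant doc at rank 3 -> hit
    "q2": (["d2", "d4", "d9"], {"d5"}),  # no relevant doc in top-3 -> miss
}
score = sum(hit_at_k(ranked, rel) for ranked, rel in queries.values()) / len(queries)
print(score)  # 0.5
```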
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
|
intm/codet5_go-generation | ---
license: apache-2.0
---
max_src_len = 512, max_trg_len = 256
|
galman33/gal_yair_83000_1664x832 | ---
dataset_info:
features:
- name: lat
dtype: float64
- name: lon
dtype: float64
- name: country_code
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 12963511218.0
num_examples: 83000
download_size: 14150729267
dataset_size: 12963511218.0
---
# Dataset Card for "gal_yair_large"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
alujjdnd/Reddit-US-UK | ---
license: mit
datasets:
- reddit
language:
- en
---
# Reddit US UK Subreddits Dataset
This repository contains data from Reddit, from the subreddits of the **fifty (50) US states**, and the **ten (10) UK cities** listed below:
1. London
2. Manchester
3. Birmingham
4. Leeds-Bradford
5. Glasgow
6. Southampton-Portsmouth
7. Liverpool
8. Newcastle
9. Nottingham
10. Sheffield
In addition, r/CasualUK is also included in this dataset.
All data are sourced from the following data source: https://academictorrents.com/details/c398a571976c78d346c325bd75c47b82edf6124e
The data spans from the start of 2005-06 to the end of 2022-12. Files with the suffix "submissions" contain posts, and files with the suffix "comments" contain the comments in the various subreddits.
The data is compressed in the zst format, and the uncompressed raw data is in JSON format. |
seyonec/goodscents_leffingwell | ---
license: mit
task_categories:
- graph-ml
tags:
- chemistry
--- |
CyberHarem/circe_fgo | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of circe/キルケー/喀耳刻 (Fate/Grand Order)
This is the dataset of circe/キルケー/喀耳刻 (Fate/Grand Order), containing 386 images and their tags.
The core tags of this character are `pointy_ears, wings, head_wings, pink_hair, feathered_wings, long_hair, breasts, small_breasts, multicolored_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 386 | 531.34 MiB | [Download](https://huggingface.co/datasets/CyberHarem/circe_fgo/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 386 | 468.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/circe_fgo/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 905 | 875.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/circe_fgo/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/circe_fgo',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, looking_at_viewer, solo, white_skirt, armlet, bracelet, holding_staff, sleeveless, smile, bare_shoulders, cowboy_shot, navel, necklace, thighlet, miniskirt, pink_eyes, simple_background, white_background |
| 1 | 6 |  |  |  |  |  | 1girl, bracelet, looking_at_viewer, navel, solo, necklace, simple_background, white_background, white_skirt, holding_staff, smile, sleeveless |
| 2 | 8 |  |  |  |  |  | 1girl, blush, solo, necklace, open_mouth, upper_body, brown_wings, :d, facing_viewer, ^_^, bracelet, collarbone, pig |
| 3 | 6 |  |  |  |  |  | 1girl, hetero, bestiality, blush, crying, saliva, sex_from_behind, tears, doggystyle, necklace, animal, bracelet, clenched_teeth, cum, open_mouth, pig, rape, solo_focus, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | solo | white_skirt | armlet | bracelet | holding_staff | sleeveless | smile | bare_shoulders | cowboy_shot | navel | necklace | thighlet | miniskirt | pink_eyes | simple_background | white_background | blush | open_mouth | upper_body | brown_wings | :d | facing_viewer | ^_^ | collarbone | pig | hetero | bestiality | crying | saliva | sex_from_behind | tears | doggystyle | animal | clenched_teeth | cum | rape | solo_focus |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:-------|:--------------|:---------|:-----------|:----------------|:-------------|:--------|:-----------------|:--------------|:--------|:-----------|:-----------|:------------|:------------|:--------------------|:-------------------|:--------|:-------------|:-------------|:--------------|:-----|:----------------|:------|:-------------|:------|:---------|:-------------|:---------|:---------|:------------------|:--------|:-------------|:---------|:-----------------|:------|:-------|:-------------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | X | X | X | | X | X | X | X | | | X | X | | | | X | X | | | | | | | | | | | | | | | | | | | | | |
| 2 | 8 |  |  |  |  |  | X | | X | | | X | | | | | | | X | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | | | | | X | | | | | | | X | | | | | X | X | X | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
Deojoandco/capstone_fromgpt_without_gold_v11_all | ---
dataset_info:
features:
- name: dialog_id
dtype: int64
- name: dialogue
dtype: string
- name: summary
dtype: string
- name: gold_tags
dtype: string
- name: gpt_success
dtype: bool
- name: gpt_response
dtype: string
- name: gold_tags_tokens_count
dtype: int64
- name: GPT_TAGS_FOUND
dtype: bool
- name: gpt_output_tags
dtype: string
- name: gpt_output_tag_tokens_count
dtype: int64
- name: GPT_MI_FOUND
dtype: bool
- name: gpt_tags_token_count
dtype: int64
- name: gpt_tags
dtype: string
- name: tag_token_count_match
dtype: bool
- name: precision
dtype: float64
- name: recall
dtype: float64
- name: f1
dtype: float64
- name: accuracy
dtype: float64
splits:
- name: validation
num_bytes: 23400
num_examples: 12
- name: test
num_bytes: 14700
num_examples: 12
download_size: 45072
dataset_size: 38100
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# Dataset Card for "capstone_fromgpt_without_gold_v11_all"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ibranze/araproje_hellaswag_tr_conf_gpt2_bestscore_is | ---
dataset_info:
features:
- name: ind
dtype: int32
- name: activity_label
dtype: string
- name: ctx_a
dtype: string
- name: ctx_b
dtype: string
- name: ctx
dtype: string
- name: endings
sequence: string
- name: source_id
dtype: string
- name: split
dtype: string
- name: split_type
dtype: string
- name: label
dtype: string
splits:
- name: validation
num_bytes: 162703.0
num_examples: 250
download_size: 0
dataset_size: 162703.0
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "araproje_hellaswag_tr_conf_gpt2_bestscore_is"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
niv-al/sq-anli_a2 | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: labels
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 10416951
num_examples: 30000
- name: validation
num_bytes: 49978
num_examples: 144
- name: test
num_bytes: 51667
num_examples: 144
download_size: 5905662
dataset_size: 10518596
language:
- sq
---
# Dataset Card for "sq-anli_a2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
reciprocate/synth | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: selected
dtype: string
- name: rejected
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 7294606
num_examples: 2374
- name: test
num_bytes: 661088
num_examples: 202
download_size: 1651895
dataset_size: 7955694
---
# Dataset Card for "synth"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_stabilityai__StableBeluga2 | ---
pretty_name: Evaluation run of stabilityai/StableBeluga2
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [stabilityai/StableBeluga2](https://huggingface.co/stabilityai/StableBeluga2)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_stabilityai__StableBeluga2\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-15T10:41:03.838240](https://huggingface.co/datasets/open-llm-leaderboard/details_stabilityai__StableBeluga2/blob/main/results_2023-10-15T10-41-03.838240.json) (note\
\ that there might be results for other tasks in the repo if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.4326761744966443,\n\
\ \"em_stderr\": 0.005073838660621812,\n \"f1\": 0.5027527265100691,\n\
\ \"f1_stderr\": 0.0048086605803724005,\n \"acc\": 0.5940617757706712,\n\
\ \"acc_stderr\": 0.01188966924347996\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.4326761744966443,\n \"em_stderr\": 0.005073838660621812,\n\
\ \"f1\": 0.5027527265100691,\n \"f1_stderr\": 0.0048086605803724005\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.35860500379075055,\n \
\ \"acc_stderr\": 0.013210317364134026\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.829518547750592,\n \"acc_stderr\": 0.010569021122825897\n\
\ }\n}\n```"
repo_url: https://huggingface.co/stabilityai/StableBeluga2
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_15T10_41_03.838240
path:
- '**/details_harness|drop|3_2023-10-15T10-41-03.838240.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-15T10-41-03.838240.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_15T10_41_03.838240
path:
- '**/details_harness|gsm8k|5_2023-10-15T10-41-03.838240.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-15T10-41-03.838240.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_15T10_41_03.838240
path:
- '**/details_harness|winogrande|5_2023-10-15T10-41-03.838240.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-15T10-41-03.838240.parquet'
- config_name: results
data_files:
- split: 2023_10_15T10_41_03.838240
path:
- results_2023-10-15T10-41-03.838240.parquet
- split: latest
path:
- results_2023-10-15T10-41-03.838240.parquet
---
# Dataset Card for Evaluation run of stabilityai/StableBeluga2
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/stabilityai/StableBeluga2
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [stabilityai/StableBeluga2](https://huggingface.co/stabilityai/StableBeluga2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_stabilityai__StableBeluga2",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-15T10:41:03.838240](https://huggingface.co/datasets/open-llm-leaderboard/details_stabilityai__StableBeluga2/blob/main/results_2023-10-15T10-41-03.838240.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.4326761744966443,
"em_stderr": 0.005073838660621812,
"f1": 0.5027527265100691,
"f1_stderr": 0.0048086605803724005,
"acc": 0.5940617757706712,
"acc_stderr": 0.01188966924347996
},
"harness|drop|3": {
"em": 0.4326761744966443,
"em_stderr": 0.005073838660621812,
"f1": 0.5027527265100691,
"f1_stderr": 0.0048086605803724005
},
"harness|gsm8k|5": {
"acc": 0.35860500379075055,
"acc_stderr": 0.013210317364134026
},
"harness|winogrande|5": {
"acc": 0.829518547750592,
"acc_stderr": 0.010569021122825897
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
mrseba/currency_data_project | ---
task_categories:
- feature-extraction
language:
- en
tags:
- EUR
- USD
- UAH
- RUB
- RON
pretty_name: currency
size_categories:
- n<1K
--- |
azhx/counterfact-simple | ---
dataset_info:
features:
- name: subject
dtype: string
- name: proposition
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
- name: case_id
dtype: int64
splits:
- name: train
num_bytes: 12882614.735952066
num_examples: 118363
- name: test
num_bytes: 1431353.264047934
num_examples: 13151
download_size: 5496476
dataset_size: 14313968.0
---
# Dataset Card for "counterfact-simple"
Dataset from [ROME](https://rome.baulab.info/) by Meng et al., simplified to be just prompts, paraphrased prompts, and their true and false targets.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lmms-lab/ai2d | ---
dataset_info:
features:
- name: question
dtype: string
- name: options
sequence: string
- name: answer
dtype: string
- name: image
dtype: image
splits:
- name: test
num_bytes: 537663370.328
num_examples: 3088
download_size: 139466424
dataset_size: 537663370.328
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
```bibtex
@misc{kembhavi2016diagram,
      title={A Diagram Is Worth A Dozen Images},
      author={Aniruddha Kembhavi and Mike Salvato and Eric Kolve and Minjoon Seo and Hannaneh Hajishirzi and Ali Farhadi},
      year={2016},
      eprint={1603.07396},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
``` |
FanChen0116/bus_few4_128x | ---
dataset_info:
features:
- name: id
dtype: int64
- name: tokens
sequence: string
- name: labels
sequence:
class_label:
names:
'0': O
'1': I-from_location
'2': B-from_location
'3': B-leaving_date
'4': I-leaving_date
'5': I-to_location
'6': B-to_location
- name: request_slot
sequence: string
splits:
- name: train
num_bytes: 1752765
num_examples: 8960
- name: validation
num_bytes: 6900
num_examples: 35
- name: test
num_bytes: 70618
num_examples: 377
download_size: 0
dataset_size: 1830283
---
# Dataset Card for "bus_few4_128x"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
meerlubna/StateBankPakistanDataset | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 17924
num_examples: 79
download_size: 9894
dataset_size: 17924
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
alishaguptavirdi/SocialMedia | ---
license: apache-2.0
---
|
dandrade/es-en | ---
dataset_info:
features:
- name: ES
dtype: string
- name: EN
dtype: string
splits:
- name: train
num_bytes: 1236977.6
num_examples: 3200
- name: test
num_bytes: 309244.4
num_examples: 800
download_size: 931996
dataset_size: 1546222.0
---
# Dataset Card for "es-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
PNLPhub/DigiMag | ---
license: apache-2.0
---
|
flinefilms/frannca | ---
license: apache-2.0
---
|
ddahlmeier/sutd_qa_dataset | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 109402.0
num_examples: 221
download_size: 51933
dataset_size: 109402.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
winie521/test | ---
language:
- zh
pretty_name: tes
--- |
zhengzhongliang/SynthCompR | ---
license: cc-by-nc-sa-4.0
---
|
arbml/Ashaar_tafeelah | ---
dataset_info:
features:
- name: sequence
dtype: string
- name: tafeelah
dtype: string
- name: meter
dtype: string
splits:
- name: train
num_bytes: 78684
num_examples: 986
download_size: 18630
dataset_size: 78684
---
# Dataset Card for "Ashaar_tafeelah"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
d0rj/RuBQ_2.0-paragraphs | ---
configs:
- config_name: default
data_files:
- split: paragraphs
path: data/paragraphs-*
dataset_info:
features:
- name: uid
dtype: int64
- name: ru_wiki_pageid
dtype: int64
- name: text
dtype: string
splits:
- name: paragraphs
num_bytes: 47303369
num_examples: 56952
download_size: 24269133
dataset_size: 47303369
license: cc-by-sa-4.0
task_categories:
- question-answering
language:
- ru
- en
tags:
- qa
- machine reading
source_datasets:
- original
pretty_name: RuBQ 2.0
size_categories:
- 10K<n<100K
paperswithcode_id: rubq
---
# RuBQ_2.0-paragraphs
## Dataset Description
- **Repository:** https://github.com/vladislavneon/RuBQ/tree/master/RuBQ_2.0
- **Paper:** [RuBQ: A Russian Dataset for Question Answering over Wikidata](https://arxiv.org/abs/2005.10659)
For **test** and **dev** data see [d0rj/RuBQ_2.0](https://huggingface.co/datasets/d0rj/RuBQ_2.0) |
simplisiva/cb65data | ---
license: apache-2.0
---
|
tr416/v2_dataset_20231008_003227 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 75203880.0
num_examples: 29285
- name: test
num_bytes: 760128.0
num_examples: 296
download_size: 12825566
dataset_size: 75964008.0
---
# Dataset Card for "v2_dataset_20231008_003227"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-moral_disputes-neg-answer | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_answer
dtype: string
splits:
- name: test
num_bytes: 126761
num_examples: 346
download_size: 73650
dataset_size: 126761
---
# Dataset Card for "mmlu-moral_disputes-neg-answer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ilhemhmz752/qsttestforllm | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 313392
num_examples: 975
download_size: 45093
dataset_size: 313392
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
microsoft/CLUES | ---
license: mit
---
# CLUES: Few-Shot Learning Evaluation in Natural Language Understanding
This repo contains the data for the NeurIPS 2021 benchmark [Constrained Language Understanding Evaluation Standard (CLUES)](https://openreview.net/pdf?id=VhIIQBm00VI).
## Leaderboard
We maintain a [Leaderboard](https://github.com/microsoft/CLUES) allowing researchers to submit their results as entries.
### Submission Instructions
- Each submission must be submitted as a pull request modifying the markdown file underlying the leaderboard.
- The submission must attach an accompanying public paper and public source code for reproducing its results on our dataset.
- A submission can be toward any subset of tasks in our benchmark, or toward the aggregate leaderboard.
- For any task targeted by the submission, we require evaluation on (1) 10, 20, *and* 30 shots, and (2) all 5 splits of the corresponding dataset and a report of their mean and standard deviation.
- Each leaderboard will be sorted by the 30-shot mean S1 score (where S1 score is a variant of F1 score defined in our paper).
- The submission should not use data from the 4 other splits during few-shot finetuning of any 1 split, either as extra training set or as validation set for hyperparameter tuning.
- However, we allow external data, labeled or unlabeled, to be used for such purposes.
Each submission using external data must mark the corresponding columns "external labeled" and/or "external unlabeled".
Note, in this context, "external data" refers to data used *after pretraining* (e.g., for task-specific tuning); in particular, methods using existing pretrained models only, without extra data, should not mark either column. For obvious reasons, models cannot be trained on the original labeled datasets from which we sampled the few-shot CLUES data.
- In the table entry, the submission should include a method name and a citation, hyperlinking to their publicly released source code reproducing the results. See the last entry of the table below for an example.
### Abbreviations
- FT = (classic) finetuning
- PT = prompt based tuning
- ICL = in-context learning, in the style of GPT-3
- μ±σ = mean μ and standard deviation σ across our 5 splits. Aggregate standard deviation is calculated using the sum-of-variance formula from individual tasks' standard deviations.
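As a sketch of how the μ±σ entries and the aggregate column can be computed: the per-task scores below are invented for illustration, and the choice of population (rather than sample) normalization in `np.std` is an assumption on our part, not something the benchmark specifies.

```python
import numpy as np

# Hypothetical 30-shot S1 scores across the 5 few-shot splits of two tasks.
# The task names are real CLUES tasks; the numbers are made up for illustration.
split_scores = {
    "SST-2": [52.1, 55.8, 50.3, 54.7, 48.9],
    "MNLI":  [33.2, 40.1, 38.5, 35.0, 37.2],
}

# Per-task mean and standard deviation across the 5 splits (the mu +/- sigma entries).
per_task = {task: (np.mean(s), np.std(s)) for task, s in split_scores.items()}

# The aggregate mean is the average of the task means; the aggregate standard
# deviation combines the per-task deviations with the sum-of-variance formula
# (standard deviation of an average of independent quantities).
n = len(per_task)
agg_mean = sum(mu for mu, _ in per_task.values()) / n
agg_std = (sum(sd ** 2 for _, sd in per_task.values()) ** 0.5) / n

print(f"Average: {agg_mean:.1f}\u00b1{agg_std:.1f}")
```

With real scores, repeating this per shot setting (K=10, 20, 30) yields the full set of leaderboard entries.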
### Benchmarking CLUES for Aggregate 30-shot Evaluation
| Shots (K=30) | external labeled | external unlabeled | Average ▼ | SST-2 | MNLI | CoNLL03 | WikiANN | SQuAD-v2 | ReCoRD |
|-----------------------------------------------------------|-------------|---------------|-----------|-----------|----------|----------|----------|----------|----------|
| **Human** | N | N | 81.4 | 83.7 | 69.4 | 87.4 | 82.6 | 73.5 | 91.9 |
| T5-Large-770M-FT | N | N | 43.1±6.7 | 52.3±2.9 | 36.8±3.8 | 51.2±0.1 | 62.4±0.6 | 43.7±2.7 | 12±3.8 |
| BERT-Large-336M-FT | N | N | 42.1±7.8 | 55.4±2.5 | 33.3±1.4 | 51.3±0 | 62.5±0.6 | 35.3±6.4 | 14.9±3.4 |
| BERT-Base-110M-FT | N | N | 41.5±9.2 | 53.6±5.5 | 35.4±3.2 | 51.3±0 | 62.8±0 | 32.6±5.8 | 13.1±3.3 |
| DeBERTa-Large-400M-FT | N | N | 40.1±17.8 | 47.7±9.0 | 26.7±11 | 48.2±2.9 | 58.3±6.2 | 38.7±7.4 | 21.1±3.6 |
| RoBERTa-Large-355M-FT | N | N | 40.0±10.6 | 53.2±5.6 | 34.0±1.1 | 44.7±2.6 | 48.4±6.7 | 43.5±4.4 | 16±2.8 |
| RoBERTa-Large-355M-PT | N | N | | 90.2±1.8 | 61.6±3.5 | | | | |
| DeBERTa-Large-400M-PT | N | N | | 88.4±3.3 | 62.9±3.1 | | | | |
| BERT-Large-336M-PT | N | N | | 82.7±4.1 | 45.3±2.0 | | | | |
| GPT3-175B-ICL | N | N | | 91.0±1.6 | 33.2±0.2 | | | | |
| BERT-Base-110M-PT | N | N | | 79.4±5.6 | 42.5±3.2 | | | | |
| [LiST (Wang et al.)](https://github.com/microsoft/LiST) | N | Y | | 91.3 ±0.7 | 67.9±3.0 | | | | |
| [Example (lastname et al.)](link2code) | Y/N | Y/N | 0±0 | 0±0 | 0±0 | 0±0 | 0±0 | 0±0 | 0±0 |
### Individual Task Performance over Multiple Shots
#### SST-2
| Shots (K) | external labeled | external unlabeled | 10 | 20 | 30 ▼ | All |
|----------------------------------------|------------------|--------------------|-----------|-----------|----------|------|
| GPT-3 (175B) ICL | N | N | 85.9±3.7 | 92.0±0.7 | 91.0±1.6 | - |
| RoBERTa-Large PT | N | N | 88.8±3.9 | 89.0±1.1 | 90.2±1.8 | 93.8 |
| DeBERTa-Large PT | N | N | 83.4±5.3 | 87.8±3.5 | 88.4±3.3 | 91.9 |
| **Human** | N | N | 79.8 | 83 | 83.7 | - |
| BERT-Large PT | N | N | 63.2±11.3 | 78.2±9.9 | 82.7±4.1 | 91 |
| BERT-Base PT | N | N | 63.9±10.0 | 76.7±6.6 | 79.4±5.6 | 91.9 |
| BERT-Large FT | N | N | 46.3±5.5 | 55.5±3.4 | 55.4±2.5 | 99.1 |
| BERT-Base FT | N | N | 46.2±5.6 | 54.0±2.8 | 53.6±5.5 | 98.1 |
| RoBERTa-Large FT | N | N | 38.4±21.7 | 52.3±5.6 | 53.2±5.6 | 98.6 |
| T5-Large FT | N | N | 51.2±1.8 | 53.4±3.2 | 52.3±2.9 | 97.6 |
| DeBERTa-Large FT | N | N | 43.0±11.9 | 40.8±22.6 | 47.7±9.0 | 100 |
| [Example (lastname et al.)](link2code) | Y/N | Y/N | 0±0 | 0±0 | 0±0 | - |
#### MNLI
| Shots (K) | external labeled | external unlabeled | 10 | 20 | 30 ▼ | All |
|---------------------------------------------------------|------------------|--------------------|-----------|-----------|-----------|------|
| **Human** | N | Y | 78.1 | 78.6 | 69.4 | - |
| [LiST (Wang et al.)](https://github.com/microsoft/LiST) | N | N | 60.5±8.3 | 67.2±4.5 | 67.9±3.0 | - |
| DeBERTa-Large PT | N | N | 44.5±8.2 | 60.7±5.3 | 62.9±3.1 | 88.1 |
| RoBERTa-Large PT | N | N | 57.7±3.6 | 58.6±2.9 | 61.6±3.5 | 87.1 |
| BERT-Large PT | N | N | 41.7±1.0 | 43.7±2.1 | 45.3±2.0 | 81.9 |
| BERT-Base PT | N | N | 40.4±1.8 | 42.1±4.4 | 42.5±3.2 | 81 |
| T5-Large FT | N | N | 39.8±3.3 | 37.9±4.3 | 36.8±3.8 | 85.9 |
| BERT-Base FT | N | N | 37.0±5.2 | 35.2±2.7 | 35.4±3.2 | 81.6 |
| RoBERTa-Large FT | N | N | 34.3±2.8 | 33.4±0.9 | 34.0±1.1 | 85.5 |
| BERT-Large FT | N | N | 33.7±0.4 | 28.2±14.8 | 33.3±1.4 | 80.9 |
| GPT-3 (175B) ICL | N | N | 33.5±0.7 | 33.1±0.3 | 33.2±0.2 | - |
| DeBERTa-Large FT | N | N | 27.4±14.1 | 33.6±2.5 | 26.7±11.0 | 87.6 |
#### CoNLL03
| Shots (K) | external labeled | external unlabeled | 10 | 20 | 30 ▼ | All |
|------------------|------------------|--------------------|----------|----------|----------|------|
| **Human** | N | N | 87.7 | 89.7 | 87.4 | - |
| BERT-Base FT | N | N | 51.3±0 | 51.3±0 | 51.3±0 | - |
| BERT-Large FT | N | N | 51.3±0 | 51.3±0 | 51.3±0 | 89.3 |
| T5-Large FT | N | N | 46.3±6.9 | 50.0±0.7 | 51.2±0.1 | 92.2 |
| DeBERTa-Large FT | N | N | 50.1±1.2 | 47.8±2.5 | 48.2±2.9 | 93.6 |
| RoBERTa-Large FT | N | N | 50.8±0.5 | 44.6±5.1 | 44.7±2.6 | 93.2 |
#### WikiANN
| Shots (K) | external labeled | external unlabeled | 10 | 20 | 30 ▼ | All |
|------------------|------------------|--------------------|----------|----------|----------|------|
| **Human** | N | N | 81.4 | 83.5 | 82.6 | - |
| BERT-Base FT | N | N | 62.8±0 | 62.8±0 | 62.8±0 | 88.8 |
| BERT-Large FT | N | N | 62.8±0 | 62.6±0.4 | 62.5±0.6 | 91 |
| T5-Large FT | N | N | 61.7±0.7 | 62.1±0.2 | 62.4±0.6 | 87.4 |
| DeBERTa-Large FT | N | N | 58.5±3.3 | 57.9±5.8 | 58.3±6.2 | 91.1 |
| RoBERTa-Large FT | N | N | 58.5±8.8 | 56.9±3.4 | 48.4±6.7 | 91.2 |
#### SQuAD v2
| Shots (K) | external labeled | external unlabeled | 10 | 20 | 30 ▼ | All |
|------------------|------------------|--------------------|----------|-----------|----------|------|
| **Human** | N | N | 71.9 | 76.4 | 73.5 | - |
| T5-Large FT | N | N | 43.6±3.5 | 28.7±13.0 | 43.7±2.7 | 87.2 |
| RoBERTa-Large FT | N | N | 38.1±7.2 | 40.1±6.4 | 43.5±4.4 | 89.4 |
| DeBERTa-Large FT | N | N | 41.4±7.3 | 44.4±4.5 | 38.7±7.4 | 90 |
| BERT-Large FT | N | N | 42.3±5.6 | 35.8±9.7 | 35.3±6.4 | 81.8 |
| BERT-Base FT | N | N | 46.0±2.4 | 34.9±9.0 | 32.6±5.8 | 76.3 |
#### ReCoRD
| Shots (K) | external labeled | external unlabeled | 10 | 20 | 30 ▼ | All |
|------------------|------------------|--------------------|----------|----------|----------|------|
| **Human** | N | N | 94.1 | 94.2 | 91.9 | - |
| DeBERTa-Large FT | N | N | 15.7±5.0 | 16.8±5.7 | 21.1±3.6 | 80.7 |
| RoBERTa-Large FT | N | N | 12.0±1.9 | 9.9±6.2 | 16.0±2.8 | 80.3 |
| BERT-Large FT | N | N | 9.9±5.2 | 11.8±4.9 | 14.9±3.4 | 66 |
| BERT-Base FT | N | N | 10.3±1.8 | 11.7±2.4 | 13.1±3.3 | 54.4 |
| T5-Large FT | N | N | 11.9±2.7 | 11.7±1.5 | 12.0±3.8 | 77.3 |
## How do I cite CLUES?
```
@inproceedings{cluesteam2021,
title={Few-Shot Learning Evaluation in Natural Language Understanding},
author={Mukherjee, Subhabrata and Liu, Xiaodong and Zheng, Guoqing and Hosseini, Saghar and Cheng, Hao and Yang, Greg and Meek, Christopher and Awadallah, Ahmed Hassan and Gao, Jianfeng},
booktitle = {NeurIPS 2021},
year = {2021},
month = {December},
url = {https://www.microsoft.com/en-us/research/publication/clues-few-shot-learning-evaluation-in-natural-language-understanding/},
}
```
## Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft
trademarks or logos is subject to and must follow
[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).
Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.
Any use of third-party trademarks or logos is subject to those third parties' policies.
|
P1ot3r/libri-val-en-whisper-small | ---
dataset_info:
features:
- name: input_features
sequence:
sequence: float32
- name: labels
sequence: int64
splits:
- name: validation
num_bytes: 2596418544
num_examples: 2703
download_size: 674059720
dataset_size: 2596418544
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
|
yunus-emre/arithmetic-tr | ---
dataset_info:
features:
- name: label
dtype: int64
- name: context
dtype: string
- name: completion
dtype: int64
splits:
- name: test
num_bytes: 1178162
num_examples: 20000
download_size: 427337
dataset_size: 1178162
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
peymanatlylu/abus | ---
license: apache-2.0
---
|
autility/ns3456_3451_clf | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
- name: split
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 103363480
num_examples: 118557
- name: test
num_bytes: 25883559
num_examples: 29700
download_size: 57747404
dataset_size: 129247039
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
open-llm-leaderboard/details_TinyLlama__TinyLlama-1.1B-Chat-v1.0 | ---
pretty_name: Evaluation run of TinyLlama/TinyLlama-1.1B-Chat-v1.0
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TinyLlama__TinyLlama-1.1B-Chat-v1.0\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-04T11:44:55.514182](https://huggingface.co/datasets/open-llm-leaderboard/details_TinyLlama__TinyLlama-1.1B-Chat-v1.0/blob/main/results_2024-01-04T11-44-55.514182.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.2609421720124211,\n\
\ \"acc_stderr\": 0.03091039790056125,\n \"acc_norm\": 0.26176871498253385,\n\
\ \"acc_norm_stderr\": 0.0316552369448013,\n \"mc1\": 0.23378212974296206,\n\
\ \"mc1_stderr\": 0.014816195991931586,\n \"mc2\": 0.37475758071242915,\n\
\ \"mc2_stderr\": 0.013911882093015021\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.34982935153583616,\n \"acc_stderr\": 0.01393680921215828,\n\
\ \"acc_norm\": 0.3609215017064846,\n \"acc_norm_stderr\": 0.01403476138617546\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.4592710615415256,\n\
\ \"acc_stderr\": 0.00497319929633997,\n \"acc_norm\": 0.6110336586337383,\n\
\ \"acc_norm_stderr\": 0.004865193237024058\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.23,\n \"acc_stderr\": 0.04229525846816505,\n \
\ \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.04229525846816505\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.17037037037037037,\n\
\ \"acc_stderr\": 0.032477811859955935,\n \"acc_norm\": 0.17037037037037037,\n\
\ \"acc_norm_stderr\": 0.032477811859955935\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.17763157894736842,\n \"acc_stderr\": 0.031103182383123387,\n\
\ \"acc_norm\": 0.17763157894736842,\n \"acc_norm_stderr\": 0.031103182383123387\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.25,\n\
\ \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n \
\ \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.27547169811320754,\n \"acc_stderr\": 0.02749566368372406,\n\
\ \"acc_norm\": 0.27547169811320754,\n \"acc_norm_stderr\": 0.02749566368372406\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2361111111111111,\n\
\ \"acc_stderr\": 0.03551446610810826,\n \"acc_norm\": 0.2361111111111111,\n\
\ \"acc_norm_stderr\": 0.03551446610810826\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.26,\n \"acc_stderr\": 0.0440844002276808,\n \
\ \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.0440844002276808\n },\n\
\ \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.27,\n\
\ \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.27,\n \
\ \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.1907514450867052,\n\
\ \"acc_stderr\": 0.02995785132986934,\n \"acc_norm\": 0.1907514450867052,\n\
\ \"acc_norm_stderr\": 0.02995785132986934\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.19607843137254902,\n \"acc_stderr\": 0.03950581861179961,\n\
\ \"acc_norm\": 0.19607843137254902,\n \"acc_norm_stderr\": 0.03950581861179961\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.27,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.27,\n\
\ \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.2723404255319149,\n \"acc_stderr\": 0.029101290698386708,\n\
\ \"acc_norm\": 0.2723404255319149,\n \"acc_norm_stderr\": 0.029101290698386708\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.23684210526315788,\n\
\ \"acc_stderr\": 0.039994238792813344,\n \"acc_norm\": 0.23684210526315788,\n\
\ \"acc_norm_stderr\": 0.039994238792813344\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.23448275862068965,\n \"acc_stderr\": 0.035306258743465914,\n\
\ \"acc_norm\": 0.23448275862068965,\n \"acc_norm_stderr\": 0.035306258743465914\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.2857142857142857,\n \"acc_stderr\": 0.023266512213730575,\n \"\
acc_norm\": 0.2857142857142857,\n \"acc_norm_stderr\": 0.023266512213730575\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.23015873015873015,\n\
\ \"acc_stderr\": 0.03764950879790606,\n \"acc_norm\": 0.23015873015873015,\n\
\ \"acc_norm_stderr\": 0.03764950879790606\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.24838709677419354,\n\
\ \"acc_stderr\": 0.024580028921481006,\n \"acc_norm\": 0.24838709677419354,\n\
\ \"acc_norm_stderr\": 0.024580028921481006\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.2512315270935961,\n \"acc_stderr\": 0.030516530732694433,\n\
\ \"acc_norm\": 0.2512315270935961,\n \"acc_norm_stderr\": 0.030516530732694433\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.24,\n \"acc_stderr\": 0.04292346959909282,\n \"acc_norm\"\
: 0.24,\n \"acc_norm_stderr\": 0.04292346959909282\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.24848484848484848,\n \"acc_stderr\": 0.03374402644139405,\n\
\ \"acc_norm\": 0.24848484848484848,\n \"acc_norm_stderr\": 0.03374402644139405\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.22727272727272727,\n \"acc_stderr\": 0.029857515673386407,\n \"\
acc_norm\": 0.22727272727272727,\n \"acc_norm_stderr\": 0.029857515673386407\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.22279792746113988,\n \"acc_stderr\": 0.03003114797764154,\n\
\ \"acc_norm\": 0.22279792746113988,\n \"acc_norm_stderr\": 0.03003114797764154\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.2717948717948718,\n \"acc_stderr\": 0.022556551010132354,\n\
\ \"acc_norm\": 0.2717948717948718,\n \"acc_norm_stderr\": 0.022556551010132354\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.25925925925925924,\n \"acc_stderr\": 0.026719240783712177,\n \
\ \"acc_norm\": 0.25925925925925924,\n \"acc_norm_stderr\": 0.026719240783712177\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.24369747899159663,\n \"acc_stderr\": 0.027886828078380544,\n\
\ \"acc_norm\": 0.24369747899159663,\n \"acc_norm_stderr\": 0.027886828078380544\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.2052980132450331,\n \"acc_stderr\": 0.03297986648473836,\n \"\
acc_norm\": 0.2052980132450331,\n \"acc_norm_stderr\": 0.03297986648473836\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.23853211009174313,\n \"acc_stderr\": 0.01827257581023187,\n \"\
acc_norm\": 0.23853211009174313,\n \"acc_norm_stderr\": 0.01827257581023187\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.4166666666666667,\n \"acc_stderr\": 0.03362277436608043,\n \"\
acc_norm\": 0.4166666666666667,\n \"acc_norm_stderr\": 0.03362277436608043\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.25,\n \"acc_stderr\": 0.03039153369274154,\n \"acc_norm\": 0.25,\n\
\ \"acc_norm_stderr\": 0.03039153369274154\n },\n \"harness|hendrycksTest-high_school_world_history|5\"\
: {\n \"acc\": 0.2320675105485232,\n \"acc_stderr\": 0.02747974455080851,\n\
\ \"acc_norm\": 0.2320675105485232,\n \"acc_norm_stderr\": 0.02747974455080851\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.35874439461883406,\n\
\ \"acc_stderr\": 0.032190792004199956,\n \"acc_norm\": 0.35874439461883406,\n\
\ \"acc_norm_stderr\": 0.032190792004199956\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.24427480916030533,\n \"acc_stderr\": 0.03768335959728745,\n\
\ \"acc_norm\": 0.24427480916030533,\n \"acc_norm_stderr\": 0.03768335959728745\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.256198347107438,\n \"acc_stderr\": 0.03984979653302871,\n \"acc_norm\"\
: 0.256198347107438,\n \"acc_norm_stderr\": 0.03984979653302871\n },\n\
\ \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.23148148148148148,\n\
\ \"acc_stderr\": 0.04077494709252626,\n \"acc_norm\": 0.23148148148148148,\n\
\ \"acc_norm_stderr\": 0.04077494709252626\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.22699386503067484,\n \"acc_stderr\": 0.032910995786157686,\n\
\ \"acc_norm\": 0.22699386503067484,\n \"acc_norm_stderr\": 0.032910995786157686\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.29464285714285715,\n\
\ \"acc_stderr\": 0.04327040932578728,\n \"acc_norm\": 0.29464285714285715,\n\
\ \"acc_norm_stderr\": 0.04327040932578728\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.2524271844660194,\n \"acc_stderr\": 0.04301250399690875,\n\
\ \"acc_norm\": 0.2524271844660194,\n \"acc_norm_stderr\": 0.04301250399690875\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.2777777777777778,\n\
\ \"acc_stderr\": 0.02934311479809448,\n \"acc_norm\": 0.2777777777777778,\n\
\ \"acc_norm_stderr\": 0.02934311479809448\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.27,\n \"acc_stderr\": 0.04461960433384741,\n \
\ \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.04461960433384741\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.2822477650063857,\n\
\ \"acc_stderr\": 0.01609530296987856,\n \"acc_norm\": 0.2822477650063857,\n\
\ \"acc_norm_stderr\": 0.01609530296987856\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.23121387283236994,\n \"acc_stderr\": 0.022698657167855716,\n\
\ \"acc_norm\": 0.23121387283236994,\n \"acc_norm_stderr\": 0.022698657167855716\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.24692737430167597,\n\
\ \"acc_stderr\": 0.014422292204808835,\n \"acc_norm\": 0.24692737430167597,\n\
\ \"acc_norm_stderr\": 0.014422292204808835\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.24509803921568626,\n \"acc_stderr\": 0.024630048979824765,\n\
\ \"acc_norm\": 0.24509803921568626,\n \"acc_norm_stderr\": 0.024630048979824765\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.26688102893890675,\n\
\ \"acc_stderr\": 0.025122637608816646,\n \"acc_norm\": 0.26688102893890675,\n\
\ \"acc_norm_stderr\": 0.025122637608816646\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.25617283950617287,\n \"acc_stderr\": 0.0242885336377261,\n\
\ \"acc_norm\": 0.25617283950617287,\n \"acc_norm_stderr\": 0.0242885336377261\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.24822695035460993,\n \"acc_stderr\": 0.0257700156442904,\n \
\ \"acc_norm\": 0.24822695035460993,\n \"acc_norm_stderr\": 0.0257700156442904\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.2379400260756193,\n\
\ \"acc_stderr\": 0.01087570078769424,\n \"acc_norm\": 0.2379400260756193,\n\
\ \"acc_norm_stderr\": 0.01087570078769424\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.2536764705882353,\n \"acc_stderr\": 0.026431329870789524,\n\
\ \"acc_norm\": 0.2536764705882353,\n \"acc_norm_stderr\": 0.026431329870789524\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.2679738562091503,\n \"acc_stderr\": 0.017917974069594722,\n \
\ \"acc_norm\": 0.2679738562091503,\n \"acc_norm_stderr\": 0.017917974069594722\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.3,\n\
\ \"acc_stderr\": 0.04389311454644286,\n \"acc_norm\": 0.3,\n \
\ \"acc_norm_stderr\": 0.04389311454644286\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.14285714285714285,\n \"acc_stderr\": 0.022401787435256386,\n\
\ \"acc_norm\": 0.14285714285714285,\n \"acc_norm_stderr\": 0.022401787435256386\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.24378109452736318,\n\
\ \"acc_stderr\": 0.030360490154014645,\n \"acc_norm\": 0.24378109452736318,\n\
\ \"acc_norm_stderr\": 0.030360490154014645\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.26,\n \"acc_stderr\": 0.0440844002276808,\n \
\ \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.0440844002276808\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.3313253012048193,\n\
\ \"acc_stderr\": 0.03664314777288087,\n \"acc_norm\": 0.3313253012048193,\n\
\ \"acc_norm_stderr\": 0.03664314777288087\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.30409356725146197,\n \"acc_stderr\": 0.03528211258245231,\n\
\ \"acc_norm\": 0.30409356725146197,\n \"acc_norm_stderr\": 0.03528211258245231\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.23378212974296206,\n\
\ \"mc1_stderr\": 0.014816195991931586,\n \"mc2\": 0.37475758071242915,\n\
\ \"mc2_stderr\": 0.013911882093015021\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6124704025256511,\n \"acc_stderr\": 0.013692354636016766\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.02350265352539803,\n \
\ \"acc_stderr\": 0.004172883669643949\n }\n}\n```"
repo_url: https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|arc:challenge|25_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|arc:challenge|25_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|gsm8k|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|gsm8k|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hellaswag|10_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hellaswag|10_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-04T11-39-03.937670.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-04T11-44-55.514182.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-04T11-44-55.514182.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- '**/details_harness|winogrande|5_2024-01-04T11-39-03.937670.parquet'
- split: 2024_01_04T11_44_55.514182
path:
- '**/details_harness|winogrande|5_2024-01-04T11-44-55.514182.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-04T11-44-55.514182.parquet'
- config_name: results
data_files:
- split: 2024_01_04T11_39_03.937670
path:
- results_2024-01-04T11-39-03.937670.parquet
- split: 2024_01_04T11_44_55.514182
path:
- results_2024-01-04T11-44-55.514182.parquet
- split: latest
path:
- results_2024-01-04T11-44-55.514182.parquet
---
# Dataset Card for Evaluation run of TinyLlama/TinyLlama-1.1B-Chat-v1.0
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TinyLlama__TinyLlama-1.1B-Chat-v1.0",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-01-04T11:44:55.514182](https://huggingface.co/datasets/open-llm-leaderboard/details_TinyLlama__TinyLlama-1.1B-Chat-v1.0/blob/main/results_2024-01-04T11-44-55.514182.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.2609421720124211,
"acc_stderr": 0.03091039790056125,
"acc_norm": 0.26176871498253385,
"acc_norm_stderr": 0.0316552369448013,
"mc1": 0.23378212974296206,
"mc1_stderr": 0.014816195991931586,
"mc2": 0.37475758071242915,
"mc2_stderr": 0.013911882093015021
},
"harness|arc:challenge|25": {
"acc": 0.34982935153583616,
"acc_stderr": 0.01393680921215828,
"acc_norm": 0.3609215017064846,
"acc_norm_stderr": 0.01403476138617546
},
"harness|hellaswag|10": {
"acc": 0.4592710615415256,
"acc_stderr": 0.00497319929633997,
"acc_norm": 0.6110336586337383,
"acc_norm_stderr": 0.004865193237024058
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.23,
"acc_stderr": 0.04229525846816505,
"acc_norm": 0.23,
"acc_norm_stderr": 0.04229525846816505
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.17037037037037037,
"acc_stderr": 0.032477811859955935,
"acc_norm": 0.17037037037037037,
"acc_norm_stderr": 0.032477811859955935
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.17763157894736842,
"acc_stderr": 0.031103182383123387,
"acc_norm": 0.17763157894736842,
"acc_norm_stderr": 0.031103182383123387
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.27547169811320754,
"acc_stderr": 0.02749566368372406,
"acc_norm": 0.27547169811320754,
"acc_norm_stderr": 0.02749566368372406
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2361111111111111,
"acc_stderr": 0.03551446610810826,
"acc_norm": 0.2361111111111111,
"acc_norm_stderr": 0.03551446610810826
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.26,
"acc_stderr": 0.0440844002276808,
"acc_norm": 0.26,
"acc_norm_stderr": 0.0440844002276808
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.27,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.27,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.1907514450867052,
"acc_stderr": 0.02995785132986934,
"acc_norm": 0.1907514450867052,
"acc_norm_stderr": 0.02995785132986934
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.19607843137254902,
"acc_stderr": 0.03950581861179961,
"acc_norm": 0.19607843137254902,
"acc_norm_stderr": 0.03950581861179961
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.27,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.27,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.2723404255319149,
"acc_stderr": 0.029101290698386708,
"acc_norm": 0.2723404255319149,
"acc_norm_stderr": 0.029101290698386708
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.23684210526315788,
"acc_stderr": 0.039994238792813344,
"acc_norm": 0.23684210526315788,
"acc_norm_stderr": 0.039994238792813344
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.23448275862068965,
"acc_stderr": 0.035306258743465914,
"acc_norm": 0.23448275862068965,
"acc_norm_stderr": 0.035306258743465914
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.2857142857142857,
"acc_stderr": 0.023266512213730575,
"acc_norm": 0.2857142857142857,
"acc_norm_stderr": 0.023266512213730575
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.23015873015873015,
"acc_stderr": 0.03764950879790606,
"acc_norm": 0.23015873015873015,
"acc_norm_stderr": 0.03764950879790606
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.24838709677419354,
"acc_stderr": 0.024580028921481006,
"acc_norm": 0.24838709677419354,
"acc_norm_stderr": 0.024580028921481006
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.2512315270935961,
"acc_stderr": 0.030516530732694433,
"acc_norm": 0.2512315270935961,
"acc_norm_stderr": 0.030516530732694433
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.24,
"acc_stderr": 0.04292346959909282,
"acc_norm": 0.24,
"acc_norm_stderr": 0.04292346959909282
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.24848484848484848,
"acc_stderr": 0.03374402644139405,
"acc_norm": 0.24848484848484848,
"acc_norm_stderr": 0.03374402644139405
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.22727272727272727,
"acc_stderr": 0.029857515673386407,
"acc_norm": 0.22727272727272727,
"acc_norm_stderr": 0.029857515673386407
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.22279792746113988,
"acc_stderr": 0.03003114797764154,
"acc_norm": 0.22279792746113988,
"acc_norm_stderr": 0.03003114797764154
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.2717948717948718,
"acc_stderr": 0.022556551010132354,
"acc_norm": 0.2717948717948718,
"acc_norm_stderr": 0.022556551010132354
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.25925925925925924,
"acc_stderr": 0.026719240783712177,
"acc_norm": 0.25925925925925924,
"acc_norm_stderr": 0.026719240783712177
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.24369747899159663,
"acc_stderr": 0.027886828078380544,
"acc_norm": 0.24369747899159663,
"acc_norm_stderr": 0.027886828078380544
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2052980132450331,
"acc_stderr": 0.03297986648473836,
"acc_norm": 0.2052980132450331,
"acc_norm_stderr": 0.03297986648473836
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.23853211009174313,
"acc_stderr": 0.01827257581023187,
"acc_norm": 0.23853211009174313,
"acc_norm_stderr": 0.01827257581023187
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4166666666666667,
"acc_stderr": 0.03362277436608043,
"acc_norm": 0.4166666666666667,
"acc_norm_stderr": 0.03362277436608043
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.25,
"acc_stderr": 0.03039153369274154,
"acc_norm": 0.25,
"acc_norm_stderr": 0.03039153369274154
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.2320675105485232,
"acc_stderr": 0.02747974455080851,
"acc_norm": 0.2320675105485232,
"acc_norm_stderr": 0.02747974455080851
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.35874439461883406,
"acc_stderr": 0.032190792004199956,
"acc_norm": 0.35874439461883406,
"acc_norm_stderr": 0.032190792004199956
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.24427480916030533,
"acc_stderr": 0.03768335959728745,
"acc_norm": 0.24427480916030533,
"acc_norm_stderr": 0.03768335959728745
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.256198347107438,
"acc_stderr": 0.03984979653302871,
"acc_norm": 0.256198347107438,
"acc_norm_stderr": 0.03984979653302871
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.23148148148148148,
"acc_stderr": 0.04077494709252626,
"acc_norm": 0.23148148148148148,
"acc_norm_stderr": 0.04077494709252626
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.22699386503067484,
"acc_stderr": 0.032910995786157686,
"acc_norm": 0.22699386503067484,
"acc_norm_stderr": 0.032910995786157686
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.29464285714285715,
"acc_stderr": 0.04327040932578728,
"acc_norm": 0.29464285714285715,
"acc_norm_stderr": 0.04327040932578728
},
"harness|hendrycksTest-management|5": {
"acc": 0.2524271844660194,
"acc_stderr": 0.04301250399690875,
"acc_norm": 0.2524271844660194,
"acc_norm_stderr": 0.04301250399690875
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.2777777777777778,
"acc_stderr": 0.02934311479809448,
"acc_norm": 0.2777777777777778,
"acc_norm_stderr": 0.02934311479809448
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.27,
"acc_stderr": 0.04461960433384741,
"acc_norm": 0.27,
"acc_norm_stderr": 0.04461960433384741
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.2822477650063857,
"acc_stderr": 0.01609530296987856,
"acc_norm": 0.2822477650063857,
"acc_norm_stderr": 0.01609530296987856
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.23121387283236994,
"acc_stderr": 0.022698657167855716,
"acc_norm": 0.23121387283236994,
"acc_norm_stderr": 0.022698657167855716
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.24692737430167597,
"acc_stderr": 0.014422292204808835,
"acc_norm": 0.24692737430167597,
"acc_norm_stderr": 0.014422292204808835
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.24509803921568626,
"acc_stderr": 0.024630048979824765,
"acc_norm": 0.24509803921568626,
"acc_norm_stderr": 0.024630048979824765
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.26688102893890675,
"acc_stderr": 0.025122637608816646,
"acc_norm": 0.26688102893890675,
"acc_norm_stderr": 0.025122637608816646
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.25617283950617287,
"acc_stderr": 0.0242885336377261,
"acc_norm": 0.25617283950617287,
"acc_norm_stderr": 0.0242885336377261
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.24822695035460993,
"acc_stderr": 0.0257700156442904,
"acc_norm": 0.24822695035460993,
"acc_norm_stderr": 0.0257700156442904
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.2379400260756193,
"acc_stderr": 0.01087570078769424,
"acc_norm": 0.2379400260756193,
"acc_norm_stderr": 0.01087570078769424
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.2536764705882353,
"acc_stderr": 0.026431329870789524,
"acc_norm": 0.2536764705882353,
"acc_norm_stderr": 0.026431329870789524
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.2679738562091503,
"acc_stderr": 0.017917974069594722,
"acc_norm": 0.2679738562091503,
"acc_norm_stderr": 0.017917974069594722
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.3,
"acc_stderr": 0.04389311454644286,
"acc_norm": 0.3,
"acc_norm_stderr": 0.04389311454644286
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.14285714285714285,
"acc_stderr": 0.022401787435256386,
"acc_norm": 0.14285714285714285,
"acc_norm_stderr": 0.022401787435256386
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.24378109452736318,
"acc_stderr": 0.030360490154014645,
"acc_norm": 0.24378109452736318,
"acc_norm_stderr": 0.030360490154014645
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.26,
"acc_stderr": 0.0440844002276808,
"acc_norm": 0.26,
"acc_norm_stderr": 0.0440844002276808
},
"harness|hendrycksTest-virology|5": {
"acc": 0.3313253012048193,
"acc_stderr": 0.03664314777288087,
"acc_norm": 0.3313253012048193,
"acc_norm_stderr": 0.03664314777288087
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.30409356725146197,
"acc_stderr": 0.03528211258245231,
"acc_norm": 0.30409356725146197,
"acc_norm_stderr": 0.03528211258245231
},
"harness|truthfulqa:mc|0": {
"mc1": 0.23378212974296206,
"mc1_stderr": 0.014816195991931586,
"mc2": 0.37475758071242915,
"mc2_stderr": 0.013911882093015021
},
"harness|winogrande|5": {
"acc": 0.6124704025256511,
"acc_stderr": 0.013692354636016766
},
"harness|gsm8k|5": {
"acc": 0.02350265352539803,
"acc_stderr": 0.004172883669643949
}
}
```
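The `"all"` block above aggregates the per-task entries. As a rough sketch only (the leaderboard's own aggregation may weight tasks differently), a macro-average over a handful of the per-task accuracies shown above can be computed like this:

```python
# Sketch: macro-averaging per-task accuracies from a results dict like the one
# above. Illustrative only -- not the leaderboard's exact aggregation.
per_task_acc = {
    "harness|hendrycksTest-abstract_algebra|5": 0.23,
    "harness|hendrycksTest-anatomy|5": 0.17037037037037037,
    "harness|hendrycksTest-astronomy|5": 0.17763157894736842,
    "harness|hendrycksTest-business_ethics|5": 0.25,
}

macro_avg = sum(per_task_acc.values()) / len(per_task_acc)
print(round(macro_avg, 4))  # -> 0.207
```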
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Brandoko/Instruct-Recharts-v2 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 1453192
num_examples: 623
download_size: 409363
dataset_size: 1453192
---
# Dataset Card for "Instruct-Recharts-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ChanceFocus/flare-mlesg | ---
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: text
dtype: string
- name: choices
sequence: string
- name: gold
dtype: int64
splits:
- name: test
num_bytes: 926136
num_examples: 300
download_size: 228133
dataset_size: 926136
---
# Dataset Card for "flare-mlesg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Christabelle/ai_anime_character_inspo | ---
license: unknown
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 121413764.0
num_examples: 154
download_size: 49099843
dataset_size: 121413764.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
UKPLab/UKP_ASPECT | ---
license: cc-by-nc-3.0
---
# Dataset Card for UKP ASPECT
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage: https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/1998**
- **Paper: https://aclanthology.org/P19-1054/**
- **Leaderboard: n/a**
- **Point of Contact: data\[at\]ukp.informatik.tu-darmstadt.de (http://www.ukp.tu-darmstadt.de/)**
### Dataset Summary
The UKP ASPECT Corpus includes 3,595 sentence pairs over 28 controversial topics. The sentences were crawled from a large web crawl and identified as arguments for a given topic using the ArgumenText system. The sampling and matching of the sentence pairs is described in the paper. Then, the argument similarity annotation was done via crowdsourcing. Each crowd worker could choose from four annotation options (the exact guidelines are provided in the Appendix of the paper).
If you are having problems with downloading the dataset from the huggingface hub, please download it from [here](https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/1998).
### Supported Tasks and Leaderboards
This dataset supports the following tasks:
* Sentence pair classification
* Topic classification
### Languages
English
## Dataset Structure
### Data Instances
Each instance consists of a topic, a pair of sentences, and an argument similarity label.
```
{"3d printing";"This could greatly increase the quality of life of those currently living in less than ideal conditions.";"The advent and spread of new technologies, like that of 3D printing can transform our lives in many ways.";"DTORCD"}
```
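The instance above is a record of quoted fields separated by semicolons. A minimal parsing sketch, assuming this exact quoting convention (the released file's actual dialect may differ), could look like:

```python
import csv
import io

# Hypothetical parsing of an ASPECT record as shown above: four quoted fields
# (topic, sentence_1, sentence_2, label) separated by semicolons.
record = ('"3d printing";"This could greatly increase the quality of life..."'
          ';"The advent and spread of new technologies...";"DTORCD"')

reader = csv.reader(io.StringIO(record), delimiter=";", quotechar='"')
topic, sentence_1, sentence_2, label = next(reader)
print(label)  # -> DTORCD
```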
### Data Fields
* topic: the topic keywords used to retrieve the documents
* sentence_1: the first sentence of the pair
* sentence_2: the second sentence of the pair
* label: the consolidated crowdsourced gold-standard annotation of the sentence pair (DTORCD, NS, SS, HS)
* Different Topic/Can’t decide (DTORCD): Either one or
both of the sentences belong to a topic different than
the given one, or you can’t understand one or both
sentences. If you choose this option, you need to very
briefly explain why you chose it (e.g. “The second
sentence is not grammatical”, “The first sentence is
from a different topic” etc.).
* No Similarity (NS): The two arguments belong to the
same topic, but they don’t show any similarity, i.e.
they speak about completely different aspects of the topic
* Some Similarity (SS): The two arguments belong to the
same topic, showing semantic similarity on a few aspects,
but the central message is rather different, or one
argument is way less specific than the other
* High Similarity (HS): The two arguments belong to the
same topic, and they speak about the same aspect, e.g.
using different words
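For sentence-pair classification, the four string labels typically need integer ids. One possible mapping — an assumption for illustration, not an official encoding — is:

```python
# Hypothetical label encoding for the four ASPECT annotation options.
# DTORCD pairs are often filtered out before training a similarity model.
LABEL2ID = {"DTORCD": -1, "NS": 0, "SS": 1, "HS": 2}

def encode(example):
    """Attach an integer `label_id` to a dict with a string `label` field."""
    return {**example, "label_id": LABEL2ID[example["label"]]}

pair = {"topic": "3d printing", "label": "SS"}
print(encode(pair)["label_id"])  # -> 1
```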
### Data Splits
The dataset currently does not contain standard data splits.
## Dataset Creation
### Curation Rationale
This dataset contains sentence pairs annotated with argument similarity labels that can be used to evaluate argument clustering.
### Source Data
#### Initial Data Collection and Normalization
The UKP ASPECT corpus consists of sentences which have been identified as arguments for given topics using the ArgumenText
system (Stab et al., 2018). The ArgumenText
system expects as input an arbitrary topic (query)
and searches a large web crawl for relevant documents.
Finally, it classifies all sentences contained
in the most relevant documents for a given query
into pro, con or non-arguments (with regard to the
given topic).
We picked 28 topics related to currently discussed issues from technology and society. To balance the selection of argument pairs with regard to their similarity, we applied a weak supervision
approach. For each of our 28 topics, we applied
a sampling strategy that picks two pro or con
argument sentences at random, calculates
their similarity using the system by Misra et al.
(2016), and keeps pairs with a probability aiming to balance diversity across the entire similarity
scale. This was repeated until we reached 3,595
arguments pairs, about 130 pairs for each topic.
#### Who are the source language producers?
Unidentified contributors to the world wide web.
### Annotations
#### Annotation process
The argument pairs were annotated on a range
of three degrees of similarity (no, some, and high
similarity) with the help of crowd workers on
the Amazon Mechanical Turk platform. To account for
unrelated pairs due to the sampling process,
crowd workers could choose a fourth option.
We collected seven assignments per pair
and used Multi-Annotator Competence Estimation
(MACE) with a threshold of 1.0 (Hovy et al.,
2013) to consolidate votes into a gold standard.
#### Who are the annotators?
Crowd workers on Amazon Mechanical Turk
### Personal and Sensitive Information
This dataset is fully anonymized.
## Additional Information
You can download the data via:
```
from datasets import load_dataset
dataset = load_dataset("UKPLab/UKP_ASPECT")
```
Please find more information about the code and how the data was collected in the [paper](https://aclanthology.org/P19-1054/).
### Dataset Curators
Curation is managed by our [data manager](https://www.informatik.tu-darmstadt.de/ukp/research_ukp/ukp_research_data_and_software/ukp_data_and_software.en.jsp) at UKP.
### Licensing Information
[CC-by-NC 3.0](https://creativecommons.org/licenses/by-nc/3.0/)
### Citation Information
Please cite this data using:
```
@inproceedings{reimers2019classification,
title={Classification and Clustering of Arguments with Contextualized Word Embeddings},
author={Reimers, Nils and Schiller, Benjamin and Beck, Tilman and Daxenberger, Johannes and Stab, Christian and Gurevych, Iryna},
booktitle={Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics},
pages={567--578},
year={2019}
}
```
### Contributions
Thanks to [@buenalaune](https://github.com/buenalaune) for adding this dataset.
## Tags
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- cc-by-nc-3.0
multilinguality:
- monolingual
pretty_name: UKP ASPECT Corpus
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- argument pair
- argument similarity
task_categories:
- text-classification
task_ids:
- topic-classification
- multi-input-text-classification
- semantic-similarity-classification
|
CyberHarem/mainz_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of mainz/マインツ/美因茨 (Azur Lane)
This is the dataset of mainz/マインツ/美因茨 (Azur Lane), containing 78 images and their tags.
The core tags of this character are `breasts, long_hair, blue_eyes, white_hair, large_breasts, hairband, bangs, black_hairband`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 78 | 129.08 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mainz_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 78 | 62.88 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mainz_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 196 | 137.42 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mainz_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 78 | 108.85 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mainz_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 196 | 203.85 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mainz_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/mainz_azurlane',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, black_gloves, black_skirt, garter_straps, looking_at_viewer, miniskirt, pleated_skirt, solo, white_jacket, white_thighhighs, long_sleeves, red_cape, simple_background, sword, white_background, black_cape, black_footwear, full_body, half_gloves, holding, sheath, standing, belt, cross, skindentation, thigh_strap |
| 1 | 14 |  |  |  |  |  | 1girl, looking_at_viewer, solo, collarbone, cleavage, black_bikini, blush, navel, very_long_hair, see-through, thighs, bare_shoulders, braid, closed_mouth, cowboy_shot, grey_hair, jewelry, parted_lips, shirt |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_gloves | black_skirt | garter_straps | looking_at_viewer | miniskirt | pleated_skirt | solo | white_jacket | white_thighhighs | long_sleeves | red_cape | simple_background | sword | white_background | black_cape | black_footwear | full_body | half_gloves | holding | sheath | standing | belt | cross | skindentation | thigh_strap | collarbone | cleavage | black_bikini | blush | navel | very_long_hair | see-through | thighs | bare_shoulders | braid | closed_mouth | cowboy_shot | grey_hair | jewelry | parted_lips | shirt |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:--------------|:----------------|:--------------------|:------------|:----------------|:-------|:---------------|:-------------------|:---------------|:-----------|:--------------------|:--------|:-------------------|:-------------|:-----------------|:------------|:--------------|:----------|:---------|:-----------|:-------|:--------|:----------------|:--------------|:-------------|:-----------|:---------------|:--------|:--------|:-----------------|:--------------|:---------|:-----------------|:--------|:---------------|:--------------|:------------|:----------|:--------------|:--------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | |
| 1 | 14 |  |  |  |  |  | X | | | | X | | | X | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
natural-lang-processing/sexismreddit | ---
license: unknown
language:
- en
tags:
- code
pretty_name: data-nlp
--- |
open-llm-leaderboard/details_Fredithefish__ReasonixPajama-3B-HF | ---
pretty_name: Evaluation run of Fredithefish/ReasonixPajama-3B-HF
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Fredithefish/ReasonixPajama-3B-HF](https://huggingface.co/Fredithefish/ReasonixPajama-3B-HF)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Fredithefish__ReasonixPajama-3B-HF\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-17T20:47:42.602044](https://huggingface.co/datasets/open-llm-leaderboard/details_Fredithefish__ReasonixPajama-3B-HF/blob/main/results_2023-10-17T20-47-42.602044.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.005557885906040268,\n\
\ \"em_stderr\": 0.0007613497667018498,\n \"f1\": 0.08515520134228192,\n\
\ \"f1_stderr\": 0.001865179611495464,\n \"acc\": 0.3211223493917147,\n\
\ \"acc_stderr\": 0.007758248793713638\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.005557885906040268,\n \"em_stderr\": 0.0007613497667018498,\n\
\ \"f1\": 0.08515520134228192,\n \"f1_stderr\": 0.001865179611495464\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.00530705079605762,\n \
\ \"acc_stderr\": 0.002001305720948056\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6369376479873717,\n \"acc_stderr\": 0.01351519186647922\n\
\ }\n}\n```"
repo_url: https://huggingface.co/Fredithefish/ReasonixPajama-3B-HF
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|arc:challenge|25_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_17T20_47_42.602044
path:
- '**/details_harness|drop|3_2023-10-17T20-47-42.602044.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-17T20-47-42.602044.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_17T20_47_42.602044
path:
- '**/details_harness|gsm8k|5_2023-10-17T20-47-42.602044.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-17T20-47-42.602044.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hellaswag|10_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-17T15:18:48.992858.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-17T15:18:48.992858.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-17T15:18:48.992858.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_17T20_47_42.602044
path:
- '**/details_harness|winogrande|5_2023-10-17T20-47-42.602044.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-17T20-47-42.602044.parquet'
- config_name: results
data_files:
- split: 2023_08_17T15_18_48.992858
path:
- results_2023-08-17T15:18:48.992858.parquet
- split: 2023_10_17T20_47_42.602044
path:
- results_2023-10-17T20-47-42.602044.parquet
- split: latest
path:
- results_2023-10-17T20-47-42.602044.parquet
---
# Dataset Card for Evaluation run of Fredithefish/ReasonixPajama-3B-HF
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Fredithefish/ReasonixPajama-3B-HF
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [Fredithefish/ReasonixPajama-3B-HF](https://huggingface.co/Fredithefish/ReasonixPajama-3B-HF) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Fredithefish__ReasonixPajama-3B-HF",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-17T20:47:42.602044](https://huggingface.co/datasets/open-llm-leaderboard/details_Fredithefish__ReasonixPajama-3B-HF/blob/main/results_2023-10-17T20-47-42.602044.json) (note that there might be results for other tasks in the repository if successive evals didn't cover the same tasks; you can find each one in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"em": 0.005557885906040268,
"em_stderr": 0.0007613497667018498,
"f1": 0.08515520134228192,
"f1_stderr": 0.001865179611495464,
"acc": 0.3211223493917147,
"acc_stderr": 0.007758248793713638
},
"harness|drop|3": {
"em": 0.005557885906040268,
"em_stderr": 0.0007613497667018498,
"f1": 0.08515520134228192,
"f1_stderr": 0.001865179611495464
},
"harness|gsm8k|5": {
"acc": 0.00530705079605762,
"acc_stderr": 0.002001305720948056
},
"harness|winogrande|5": {
"acc": 0.6369376479873717,
"acc_stderr": 0.01351519186647922
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
FaalSa/data1 | ---
dataset_info:
features:
- name: start
dtype: timestamp[s]
- name: target
sequence: float32
- name: item_id
dtype: string
- name: feat_static_cat
sequence: uint64
splits:
- name: train
num_bytes: 17309
num_examples: 1
- name: validation
num_bytes: 17789
num_examples: 1
- name: test
num_bytes: 18269
num_examples: 1
download_size: 41079
dataset_size: 53367
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
hlt-lab/personachatsample-jumble | ---
dataset_info:
features:
- name: context
dtype: string
- name: response
dtype: string
- name: reference
dtype: string
splits:
- name: train
num_bytes: 36304
num_examples: 100
download_size: 28190
dataset_size: 36304
---
# Dataset Card for "personachatsample-jumble"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
huggingartists/jah-khalib | ---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/jah-khalib"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.269094 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/0fed863398263b7dc223768818883d19.300x300x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/jah-khalib">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Jah Khalib</div>
<a href="https://genius.com/artists/jah-khalib">
<div style="text-align: center; font-size: 14px;">@jah-khalib</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/jah-khalib).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/jah-khalib")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|84| -| -|
The 'train' split can easily be divided into 'train', 'validation' and 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/jah-khalib")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
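For this card's 84 training examples, the cumulative cut points in the snippet above yield the following split sizes (a quick sanity check using the same arithmetic as `np.split`; the 84 comes from the data-splits table):

```python
# Reproduce the cut points used by np.split for n = 84 examples.
n = 84
train_pct, validation_pct = 0.9, 0.07

cut1 = int(n * train_pct)                     # end of the train slice
cut2 = int(n * (train_pct + validation_pct))  # end of the validation slice

sizes = (cut1, cut2 - cut1, n - cut2)
print(sizes)  # (75, 6, 3) -- train, validation, test
```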
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk}
year=2021
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
presencesw/wmt16_ro_en | ---
dataset_info:
features:
- name: en
dtype: string
- name: ro
dtype: string
splits:
- name: train
num_bytes: 188287715
num_examples: 610320
- name: validation
num_bytes: 561791
num_examples: 1999
- name: test
num_bytes: 539208
num_examples: 1999
download_size: 124524306
dataset_size: 189388714
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
irds/neumarco_ru_train | ---
pretty_name: '`neumarco/ru/train`'
viewer: false
source_datasets: ['irds/neumarco_ru']
task_categories:
- text-retrieval
---
# Dataset Card for `neumarco/ru/train`
The `neumarco/ru/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/neumarco#neumarco/ru/train).
# Data
This dataset provides:
- `queries` (i.e., topics); count=808,731
- `qrels`: (relevance assessments); count=532,761
- `docpairs`; count=269,919,004
- For `docs`, use [`irds/neumarco_ru`](https://huggingface.co/datasets/irds/neumarco_ru)
This dataset is used by: [`neumarco_ru_train_judged`](https://huggingface.co/datasets/irds/neumarco_ru_train_judged)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/neumarco_ru_train', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/neumarco_ru_train', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
docpairs = load_dataset('irds/neumarco_ru_train', 'docpairs')
for record in docpairs:
record # {'query_id': ..., 'doc_id_a': ..., 'doc_id_b': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
|
mumimumi/mumimodel_jpg | ---
license: unknown
---
|
adenp/demo-data | ---
license: other
---
|
prasanthyss/labeled_tulu2 | ---
license: apache-2.0
--- |
AdapterOcean/dollyaug-standardized_cluster_4 | ---
dataset_info:
features:
- name: text
dtype: string
- name: conversation_id
dtype: int64
- name: embedding
sequence: float64
- name: cluster
dtype: int64
splits:
- name: train
num_bytes: 10050897
num_examples: 994
download_size: 3137657
dataset_size: 10050897
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dollyaug-standardized_cluster_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jim14/guj_data | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 1901
num_examples: 9
download_size: 3200
dataset_size: 1901
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
DBQ/Saint.Laurent.Product.prices.France | ---
annotations_creators:
- other
language_creators:
- other
language:
- en
license:
- unknown
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text-classification
- image-classification
- feature-extraction
- image-segmentation
- image-to-image
- image-to-text
- object-detection
- summarization
- zero-shot-image-classification
pretty_name: France - Saint Laurent - Product-level price list
tags:
- webscraping
- ecommerce
- Saint Laurent
- fashion
- fashion product
- image
- fashion image
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: website_name
dtype: string
- name: competence_date
dtype: string
- name: country_code
dtype: string
- name: currency_code
dtype: string
- name: brand
dtype: string
- name: category1_code
dtype: string
- name: category2_code
dtype: string
- name: category3_code
dtype: string
- name: product_code
dtype: string
- name: title
dtype: string
- name: itemurl
dtype: string
- name: imageurl
dtype: string
- name: full_price
dtype: float64
- name: price
dtype: float64
- name: full_price_eur
dtype: float64
- name: price_eur
dtype: float64
- name: flg_discount
dtype: int64
splits:
- name: train
num_bytes: 1240501
num_examples: 3064
download_size: 377349
dataset_size: 1240501
---
# Saint Laurent web scraped data
## About the website
The **Fashion** and **Luxury** retail industry is a prominent economic sector in EMEA, particularly in **France**, known globally as the birthplace of Haute Couture. Anchored by prestigious French fashion houses like **Saint Laurent**, France is a cornerstone of this industry. Its influence extends from the high-end fashion districts of Paris to the whole world through **Ecommerce**. The observed dataset specifically provides **Ecommerce product-list page (PLP) data** on Saint Laurent's operations in France. This offers valuable insights into market trends, consumer preferences, and the competitive landscape, all essential factors in steering brand strategies and maintaining market relevance in the dynamic world of luxury fashion.
## Link to **dataset**
[France - Saint Laurent - Product-level price list dataset](https://www.databoutique.com/buy-data-page/Saint%20Laurent%20Product-prices%20France/r/rec6dHiH2JbY9XQx5)
|
MBZUAI/multilingual-llava-bench-in-the-wild | ---
license: cc-by-4.0
---
# 🌍 PALO: A Polyglot Large Multimodal Model for 5B People
Vision-language conversation in English, Chinese, French, Spanish, Russian, Japanese, Arabic, Hindi, Bengali and Urdu.
[](https://arxiv.org/abs/2402.14818)
[](https://github.com/mbzuai-oryx/PALO)
[](https://palo.mbzuai-oryx.ngrok.app)
## Multi-lingual Evaluation Dataset
This repository contains LLaVA Bench In-the-Wild, translated to Chinese, French, Spanish, Russian, Japanese, Arabic, Hindi, Bengali, and Urdu.
Please refer to our [paper](https://arxiv.org/abs/2402.14818) for details. |
ruanchaves/hashset_distant_sampled | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- hi
- en
license:
- unknown
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
task_ids: []
pretty_name: HashSet Distant Sampled
tags:
- word-segmentation
---
# Dataset Card for HashSet Distant Sampled
## Dataset Description
- **Repository:** [prashantkodali/HashSet](https://github.com/prashantkodali/HashSet)
- **Paper:** [HashSet -- A Dataset For Hashtag Segmentation](https://arxiv.org/abs/2201.06741)
### Dataset Summary
HashSet is a new dataset consisting of 1.9k manually annotated and 3.3M loosely supervised tweets for testing the
efficiency of hashtag segmentation models. We compare state-of-the-art hashtag segmentation models on HashSet and other
baseline datasets (STAN and BOUN), and analyse the results across the datasets to argue that HashSet can act
as a good benchmark for hashtag segmentation tasks.
HashSet Distant: 3.3M loosely collected camel-cased hashtags, each paired with its segmentation.
HashSet Distant Sampled is a sample of 20,000 camel-cased hashtags from the HashSet Distant dataset.
### Languages
Hindi and English.
## Dataset Structure
### Data Instances
```
{
'index': 282559,
'hashtag': 'Youth4Nation',
'segmentation': 'Youth 4 Nation'
}
```
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters ( such as `_` , `:`, `~` ).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
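These conventions can be checked mechanically: stripping all whitespace from `segmentation` must recover `hashtag` exactly. A minimal sketch (the function name here is ours, not part of the dataset):

```python
def is_consistent(hashtag: str, segmentation: str) -> bool:
    """Check the card's invariant: the two fields differ only in whitespace."""
    return "".join(segmentation.split()) == hashtag

# The data instance shown above satisfies the invariant:
print(is_consistent("Youth4Nation", "Youth 4 Nation"))  # True
# Spell-corrected or case-changed text would not:
print(is_consistent("youth4nation", "Youth 4 Nation"))  # False
```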
## Additional Information
### Citation Information
```
@article{kodali2022hashset,
title={HashSet--A Dataset For Hashtag Segmentation},
author={Kodali, Prashant and Bhatnagar, Akshala and Ahuja, Naman and Shrivastava, Manish and Kumaraguru, Ponnurangam},
journal={arXiv preprint arXiv:2201.06741},
year={2022}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. |
AlShurbaji/PIDray_Tensors | ---
license: apache-2.0
---
PIDray - 100 Tensors with their annotations |
bcui19/OpenHermes-2.5-llama-format | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 1615793279
num_examples: 1008268
download_size: 0
dataset_size: 1615793279
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "OpenHermes-2.5-llama-format"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/lmind_nq_train10000_eval6489_v1_qa | ---
configs:
- config_name: default
data_files:
- split: train_qa
path: data/train_qa-*
- split: train_recite_qa
path: data/train_recite_qa-*
- split: eval_qa
path: data/eval_qa-*
- split: eval_recite_qa
path: data/eval_recite_qa-*
- split: all_docs
path: data/all_docs-*
- split: all_docs_eval
path: data/all_docs_eval-*
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: answers
struct:
- name: answer_start
sequence: 'null'
- name: text
sequence: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train_qa
num_bytes: 1159729
num_examples: 10000
- name: train_recite_qa
num_bytes: 7573876
num_examples: 10000
- name: eval_qa
num_bytes: 752802
num_examples: 6489
- name: eval_recite_qa
num_bytes: 4912675
num_examples: 6489
- name: all_docs
num_bytes: 9144930
num_examples: 14014
- name: all_docs_eval
num_bytes: 9144126
num_examples: 14014
- name: train
num_bytes: 1159729
num_examples: 10000
- name: validation
num_bytes: 752802
num_examples: 6489
download_size: 21497845
dataset_size: 34600669
---
# Dataset Card for "lmind_nq_train10000_eval6489_v1_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yousaforever/yousa_data_1 | ---
license: gpl-3.0
---
A roughly 9-minute recording of normal speech, divided into 70 slices, which can be used to train a TTS model. |
antonio1206/hactiv_8 | ---
license: apache-2.0
---
|
JordanYussac/customer_service_chatbot_trial | ---
dataset_info:
features:
- name: issue_area
dtype: string
- name: issue_category
dtype: string
- name: issue_sub_category
dtype: string
- name: issue_category_sub_category
dtype: string
- name: customer_sentiment
dtype: string
- name: product_category
dtype: string
- name: product_sub_category
dtype: string
- name: issue_complexity
dtype: string
- name: agent_experience_level
dtype: string
- name: agent_experience_level_desc
dtype: string
- name: conversation
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2541279
num_examples: 1000
download_size: 826015
dataset_size: 2541279
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hyokwan/llama2_hkcode | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 5826
num_examples: 39
download_size: 2572
dataset_size: 5826
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sartajekram/BanglaRQA | ---
annotations_creators:
- human
license: cc-by-nc-sa-4.0
task_categories:
- question-answering
task_ids:
- open-domain-qa
- extractive-qa
language:
- bn
size_categories:
- 10K<n<100K
---
# Dataset Card for `BanglaRQA`
## Table of Contents
- [Dataset Card for `BanglaRQA`](#dataset-card-for-BanglaRQA)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Usage](#usage)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** [https://github.com/sartajekram419/BanglaRQA](https://github.com/sartajekram419/BanglaRQA)
- **Paper:** [BanglaRQA: A Benchmark Dataset for Under-resourced Bangla Language Reading Comprehension-based Question Answering with Diverse Question-Answer Types](https://aclanthology.org/2022.findings-emnlp.186)
### Dataset Summary
This is a human-annotated Bangla Question Answering (QA) dataset with diverse question-answer types.
### Languages
* `Bangla`
### Usage
```python
from datasets import load_dataset
dataset = load_dataset("sartajekram/BanglaRQA")
```
## Dataset Structure
### Data Instances
One example from the dataset is given below in JSON format.
```
{
'passage_id': 'bn_wiki_2977',
'title': 'ফাজিল পরীক্ষা',
'context': 'ফাজিল পরীক্ষা বাংলাদেশ ও ভারতের আলিয়া মাদ্রাসায় অনুষ্ঠিত একটি সরকারি পরীক্ষা। ফাজিল পরীক্ষা বাংলাদেশে ডিগ্রি সমমানের, কখনো স্নাতক সমমানের একটি পরীক্ষা, যা একটি ফাজিল মাদ্রাসায় অনুষ্ঠিত হয়ে থাকে। তবে ভারতে ফাজিল পরীক্ষাকে উচ্চ মাধ্যমিক শ্রেণীর (১১ বা ১২ ক্লাস) মান বলে বিবেচিত করা হয়। ফাজিল পরীক্ষা বাংলাদেশ ভারত ও পাকিস্তানের সরকারি স্বীকৃত আলিয়া মাদরাসায় প্রচলিত রয়েছে। বাংলাদেশের ফাজিল পরীক্ষা ইসলামি আরবি বিশ্ববিদ্যালয়ের অধীনে অনুষ্ঠিত হয়ে থাকে ও ভারতের ফাজিল পরীক্ষা পশ্চিমবঙ্গ মাদ্রাসা শিক্ষা পর্ষদের অধীনে অনুষ্ঠিত হয়ে থাকে।\n\n১৯৪৭ সালে ঢাকা আলিয়া মাদ্রাসা ঢাকায় স্থানান্তরের পূর্বে বাংলাদেশ ও ভারতের ফাজিল পরীক্ষা কলকাতা আলিয়া মাদ্রাসার অধীনে অনুষ্ঠিত হতো। ফাযিল পরীক্ষা বর্তমানে ইসলামি আরবী বিশ্ববিদ্যালয়ের অধীনে অনুষ্ঠিত হয়। যা পূর্বে মাদরাসা বোর্ড ও ইসলামি বিশ্ববিদ্যালয়ের আধীনে অনুষ্ঠিত হত। মাদ্রাসা-ই-আলিয়া ঢাকায় স্থানান্তরিত হলে ১৯৪৮ সালে মাদ্রাসা বোর্ডের ফাজিলগুলো পরীক্ষা ঢাকা বিশ্ববিদ্যালয় কর্তৃক গৃহীত হতো। ১৯৭৫ সালের কুদরত-এ-খুদা শিক্ষা কমিশনের সুপারিশে মাদ্রাসা বোর্ড নিয়ন্ত্রিত আলিয়া মাদ্রাসাসমূহে জাতীয় শিক্ষাক্রম ও বহুমুখী পাঠ্যসূচি প্রবর্তিত করা হয়। ১৯৮০ সালে অনুষ্ঠিত ফাজিল পরীক্ষায় এই পাঠ্যসুচী কার্যকর হয়। এই শিক্ষা কমিশন অনুসারে ফাজিল শ্রেণীতে ইসলামি শিক্ষার পাশাপাশি সাধারণ পাঠ্যসূচী অন্তর্ভুক্ত করে ফাজিল পরীক্ষাকে সাধারণ উচ্চ মাধ্যমিক এইচ এস সির সমমান ঘোষণা করা হয়।\n\n১৯৭৮ সালে অধ্যাপক মুস্তফা বিন কাসিমের নেতৃত্বে সিনিয়র মাদ্রাসা শিক্ষা ব্যবস্থা কমিটি গঠিত হয়। এই কমিটির নির্দেশনায় ১৯৮৪ সালে সাধারণ শিক্ষার স্তরের সঙ্গে বাংলাদেশ মাদ্রাসা বোর্ড নিয়ন্ত্রিত আলিয়া মাদ্রাসা শিক্ষা স্তরের সামঞ্জস্য করা হয়। ফাজিল স্তরকে ২ বছর মেয়াদী কোর্সে উন্নিত করে, মোট ১৬ বছর ব্যাপী আলিয়া মাদ্রাসার পূর্ণাঙ্গ আধুনিক শিক্ষা ব্যবস্থা প্রবর্তন করা হয়। এই কমিশনের মাধ্যমেই সরকার ফাজিল পরীক্ষাকে সাধারণ ডিগ্রি মান ঘোষণা করে।',
'question_id': 'bn_wiki_2977_01',
'question_text': 'ফাজিল পরীক্ষা বাংলাদেশ ও ভারতের আলিয়া মাদ্রাসায় অনুষ্ঠিত একটি সরকারি পরীক্ষা ?',
'is_answerable': '1',
'question_type': 'confirmation',
'answers':
{
'answer_text': ['হ্যাঁ', 'হ্যাঁ '],
'answer_type': ['yes/no', 'yes/no']
},
}
```
### Data Splits
| split        |  count |
|--------------|-------:|
| `train`      | 11,912 |
| `validation` |  1,484 |
| `test`       |  1,493 |
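The three splits together cover the 14,889 question-answer pairs reported in the paper's abstract, which is easy to verify from the split counts:

```python
# Split sizes taken from the data-splits table of this card.
splits = {"train": 11_912, "validation": 1_484, "test": 1_493}
total = sum(splits.values())
print(total)  # 14889, matching the 14,889 QA pairs reported in the paper
```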
## Additional Information
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use the dataset, please cite the following paper:
```
@inproceedings{ekram-etal-2022-banglarqa,
title = "{B}angla{RQA}: A Benchmark Dataset for Under-resourced {B}angla Language Reading Comprehension-based Question Answering with Diverse Question-Answer Types",
author = "Ekram, Syed Mohammed Sartaj and
Rahman, Adham Arik and
Altaf, Md. Sajid and
Islam, Mohammed Saidul and
Rahman, Mehrab Mustafy and
Rahman, Md Mezbaur and
Hossain, Md Azam and
Kamal, Abu Raihan Mostofa",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-emnlp.186",
pages = "2518--2532",
abstract = "High-resource languages, such as English, have access to a plethora of datasets with various question-answer types resembling real-world reading comprehension. However, there is a severe lack of diverse and comprehensive question-answering datasets in under-resourced languages like Bangla. The ones available are either translated versions of English datasets with a niche answer format or created by human annotations focusing on a specific domain, question type, or answer type. To address these limitations, this paper introduces BanglaRQA, a reading comprehension-based Bangla question-answering dataset with various question-answer types. BanglaRQA consists of 3,000 context passages and 14,889 question-answer pairs created from those passages. The dataset comprises answerable and unanswerable questions covering four unique categories of questions and three types of answers. In addition, this paper also implemented four different Transformer models for question-answering on the proposed dataset. The best-performing model achieved an overall 62.42{\%} EM and 78.11{\%} F1 score. However, detailed analyses showed that the performance varies across question-answer types, leaving room for substantial improvement of the model performance. Furthermore, we demonstrated the effectiveness of BanglaRQA as a training resource by showing strong results on the bn{\_}squad dataset. Therefore, BanglaRQA has the potential to contribute to the advancement of future research by enhancing the capability of language models. The dataset and codes are available at https://github.com/sartajekram419/BanglaRQA",
}
```
|
laion/laion2B-multi-joined-translated-to-en | |
alpayariyak/unnatural-instructions_standardized | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
splits:
- name: train
num_bytes: 99089043
num_examples: 722010
download_size: 23436478
dataset_size: 99089043
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "unnatural-instructions_standardized"
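The features above flatten conversations into individual message rows (`message`, `message_type`, `message_id`, `conversation_id`). A minimal sketch of regrouping such rows back into ordered conversations; the sample rows below are invented for illustration and only follow the schema listed above:

```python
from collections import defaultdict

# Invented sample rows matching the card's schema:
# message, message_type, message_id, conversation_id.
rows = [
    {"message": "Translate 'hello' to French.", "message_type": "instruction",
     "message_id": 0, "conversation_id": 0},
    {"message": "Bonjour.", "message_type": "output",
     "message_id": 1, "conversation_id": 0},
    {"message": "List three primes.", "message_type": "instruction",
     "message_id": 0, "conversation_id": 1},
]

def group_conversations(rows):
    """Regroup flat message rows into per-conversation message lists,
    ordered by message_id within each conversation."""
    convs = defaultdict(list)
    for row in rows:
        convs[row["conversation_id"]].append(row)
    return {
        cid: [m["message"] for m in sorted(msgs, key=lambda m: m["message_id"])]
        for cid, msgs in convs.items()
    }

print(group_conversations(rows))
```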
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MadElf1337/Pneumonia_Images | ---
license: apache-2.0
---
|
HydraLM/CoT-Collection-standardized | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
splits:
- name: train
num_bytes: 2149718484
num_examples: 3675842
download_size: 1206341432
dataset_size: 2149718484
---
# Dataset Card for "CoT-Collection-standardized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mikolaj-p/MOCKS-test | ---
license: cc-by-4.0
---
|
Gummybear05/speed_changed_w | ---
dataset_info:
features:
- name: path
dtype: string
- name: filename
dtype: string
- name: text
dtype: string
- name: quality
dtype: string
- name: city
dtype: string
- name: gender
dtype: string
- name: age
dtype: string
- name: array
sequence: float64
- name: audio
dtype: string
- name: sample_rate
dtype: int64
splits:
- name: train
num_bytes: 9618932258
num_examples: 8531
- name: test
num_bytes: 258525111
num_examples: 120
download_size: 2030595819
dataset_size: 9877457369
---
# Dataset Card for "speed_changed_w"
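Each row stores the raw waveform as a float sequence (`array`) alongside its `sample_rate`, so clip duration can be recovered directly from those two fields. A minimal sketch; the sine-wave sample below is invented for illustration:

```python
import math

def clip_duration_seconds(array, sample_rate):
    """Duration of a mono clip stored as a float sample sequence
    (the `array` and `sample_rate` fields in the schema above)."""
    return len(array) / sample_rate

# Invented example: one second of a 440 Hz sine tone at 16 kHz.
sr = 16000
samples = [math.sin(2 * math.pi * 440 * t / sr) for t in range(sr)]
print(clip_duration_seconds(samples, sr))  # 1.0
```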
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CronosGhost/cpp-code-reranking | ---
dataset_info:
features:
- name: query
dtype: string
- name: positive
sequence: string
- name: negative
sequence: string
splits:
- name: train
num_bytes: 23231663.1
num_examples: 9900
- name: test
num_bytes: 2581295.9
num_examples: 1100
download_size: 10424834
dataset_size: 25812959.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
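The schema above pairs each `query` with a list of `positive` and `negative` passages, the usual shape for reranker training. A minimal sketch of expanding one such row into labeled (query, passage, label) pairs; the example row is invented and only follows the card's field names:

```python
def make_pointwise_pairs(example):
    """Expand one reranking row (query, positive list, negative list,
    per the schema above) into (query, passage, label) training pairs."""
    pairs = [(example["query"], p, 1) for p in example["positive"]]
    pairs += [(example["query"], n, 0) for n in example["negative"]]
    return pairs

# Invented example row in the card's format.
row = {
    "query": "reverse a std::vector in place",
    "positive": ["std::reverse(v.begin(), v.end());"],
    "negative": ["v.push_back(x);", "std::sort(v.begin(), v.end());"],
}
pairs = make_pointwise_pairs(row)
print(len(pairs))  # 3
```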
|
speed1/arena | ---
license: openrail
---
|
CyberHarem/bismarck_zwei_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of bismarck_zwei/ビスマルクZwei/俾斯麦Zwei (Azur Lane)
This is the dataset of bismarck_zwei/ビスマルクZwei/俾斯麦Zwei (Azur Lane), containing 52 images and their tags.
The core tags of this character are `blonde_hair, blue_eyes, long_hair, breasts, large_breasts, hair_between_eyes, bangs, very_long_hair, eyewear_on_head, sunglasses, hat, peaked_cap`, which are pruned in this dataset.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 52 | 98.15 MiB | [Download](https://huggingface.co/datasets/CyberHarem/bismarck_zwei_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 52 | 46.06 MiB | [Download](https://huggingface.co/datasets/CyberHarem/bismarck_zwei_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 134 | 98.90 MiB | [Download](https://huggingface.co/datasets/CyberHarem/bismarck_zwei_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 52 | 81.14 MiB | [Download](https://huggingface.co/datasets/CyberHarem/bismarck_zwei_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 134 | 152.31 MiB | [Download](https://huggingface.co/datasets/CyberHarem/bismarck_zwei_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/bismarck_zwei_azurlane',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 26 |  |  |  |  |  | looking_at_viewer, 1girl, solo, cleavage, black_one-piece_swimsuit, thighs, highleg, blush, necklace, ponytail, strapless, water, wet, bare_shoulders, closed_mouth, see-through |
| 1 | 24 |  |  |  |  |  | 1girl, solo, military_uniform, looking_at_viewer, black_gloves, black_headwear, sideboob, military_hat, fur-trimmed_cape, simple_background, thighhighs |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | looking_at_viewer | 1girl | solo | cleavage | black_one-piece_swimsuit | thighs | highleg | blush | necklace | ponytail | strapless | water | wet | bare_shoulders | closed_mouth | see-through | military_uniform | black_gloves | black_headwear | sideboob | military_hat | fur-trimmed_cape | simple_background | thighhighs |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------|:--------|:-------|:-----------|:---------------------------|:---------|:----------|:--------|:-----------|:-----------|:------------|:--------|:------|:-----------------|:---------------|:--------------|:-------------------|:---------------|:-----------------|:-----------|:---------------|:-------------------|:--------------------|:-------------|
| 0 | 26 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | |
| 1 | 24 |  |  |  |  |  | X | X | X | | | | | | | | | | | | | | X | X | X | X | X | X | X | X |
|
E-EVAL/E-EVAL | ---
license: apache-2.0
size_categories:
- 1K<n<10K
task_categories:
- multiple-choice
language:
- zh
--- |