zuleo/princess-jai-lee | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- embedding
- text-to-image
- image-to-image
- art
- artistic
---
# Princess Jai Lee Embedding
A textual inversion embedding fine-tuned on Princess Jai Lee, a character from [3ee Games](https://3ee.com).

## Embedding Usage
Use the token `jaileefunkprincess` in your prompt.
All sample images also use the bad prompt embedding: https://huggingface.co/datasets/Nerfgun3/bad_prompt#version-2
---
☕ If you enjoy this model, [buy me a coffee](https://ko-fi.com/3eegames)
---
## 🧾 Prompt example:
**The queen has returned**
```Perfectly-centered close up portrait of a real life godly woman (jaileefunkprincess :1.1) with long purple hair and wearing shining armor descending from heaven, lifelike, super highly detailed, professional digital painting, artstation, concept art, Unreal Engine 5, Photorealism, HD quality, 8k resolution, cinema 4d, 3D, beautiful, cinematic, art by artgerm and greg rutkowski and alphonse mucha and loish and WLOP, dynamic pose```
Negative prompt:
```(bad_prompt_version2:0.8), lowres, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, ((ugly)), ((duplicate)), ((morbid)), ((mutilated)), out of frame, extra fingers, mutated hands, ((poorly drawn hands)), ((poorly drawn face)), (((mutation))), (((deformed))), ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), extra limbs, gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck)))```
_Steps: 80, Sampler: DPM adaptive, CFG scale: 10.5, Seed: 945244310, Size: 512x512, Model hash: d0b457ae_ (Model hash: protogen-x53-photorealism-official-release - https://civitai.com/models/3816/protogen-x53-photorealism-official-release)
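The `(token:weight)` spans in the prompts above use A1111-style attention emphasis (here `jaileefunkprincess` is up-weighted to 1.1 and `bad_prompt_version2` down-weighted to 0.8). As a minimal illustrative sketch (the parser below is not part of this embedding), such spans can be read out with the standard library:

```python
import re

def parse_emphasis(prompt: str) -> list[tuple[str, float]]:
    """Extract (token:weight) emphasis spans from an A1111-style prompt."""
    pattern = r"\(([^():]+):\s*([\d.]+)\)"
    return [(m.group(1).strip(), float(m.group(2)))
            for m in re.finditer(pattern, prompt)]

parse_emphasis("godly woman (jaileefunkprincess :1.1) in shining armor")
# → [("jaileefunkprincess", 1.1)]
```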
---
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
- You can't use the model to deliberately produce nor share illegal or harmful outputs or content
- The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
- You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
clarin-knext/scidocs-pl | ---
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl |
C-MTEB/CmedqaRetrieval | ---
configs:
- config_name: default
data_files:
- split: corpus
path: data/corpus-*
- split: queries
path: data/queries-*
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: corpus
num_bytes: 84962605
num_examples: 100001
- name: queries
num_bytes: 728106
num_examples: 3999
download_size: 61319407
dataset_size: 85690711
---
# Dataset Card for "CmedqaRetrieval"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SEACrowd/xpersona_id | ---
tags:
- dialogue-system
language:
- ind
---
# xpersona_id
XPersona is a multi-lingual extension of Persona-Chat.
The XPersona dataset includes persona conversations in six languages other than English, for building and evaluating multilingual personalized agents.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@article{lin2020xpersona,
title={XPersona: Evaluating multilingual personalized chatbot},
author={Lin, Zhaojiang and Liu, Zihan and Winata, Genta Indra and Cahyawijaya, Samuel and Madotto, Andrea and Bang, Yejin and Ishii, Etsuko and Fung, Pascale},
journal={arXiv preprint arXiv:2003.07568},
year={2020}
}
@inproceedings{cahyawijaya-etal-2021-indonlg,
title = "{I}ndo{NLG}: Benchmark and Resources for Evaluating {I}ndonesian Natural Language Generation",
author = "Cahyawijaya, Samuel and
Winata, Genta Indra and
Wilie, Bryan and
Vincentio, Karissa and
Li, Xiaohong and
Kuncoro, Adhiguna and
Ruder, Sebastian and
Lim, Zhi Yuan and
Bahar, Syafri and
Khodra, Masayu and
Purwarianti, Ayu and
Fung, Pascale",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.699",
doi = "10.18653/v1/2021.emnlp-main.699",
pages = "8875--8898"
}
```
## License
CC-BY-SA 4.0
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) |
iulusoy/test-data | ---
license: mit
task_categories:
- text-classification
language:
- en
pretty_name: mytest
size_categories:
- n<1K
--- |
CyberHarem/washington_kantaicollection | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of washington (Kantai Collection)
This is the dataset of washington (Kantai Collection), containing 234 images and their tags.
The core tags of this character are `long_hair, grey_hair, ahoge, breasts, grey_eyes, large_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g., Danbooru, Pixiv, Zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 234 | 275.46 MiB | [Download](https://huggingface.co/datasets/CyberHarem/washington_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 234 | 176.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/washington_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 546 | 354.53 MiB | [Download](https://huggingface.co/datasets/CyberHarem/washington_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 234 | 251.37 MiB | [Download](https://huggingface.co/datasets/CyberHarem/washington_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 546 | 472.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/washington_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/washington_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
Results of tag clustering; some outfits may be discoverable here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 9 |  |  |  |  |  | 1girl, blue_hairband, official_alternate_costume, solo, blush, day, cowboy_shot, ocean, outdoors, blue_one-piece_swimsuit, blue_sky, cloud, hair_flower, sarong, looking_at_viewer, smile, casual_one-piece_swimsuit |
| 1 | 13 |  |  |  |  |  | 1girl, solo, blue_bikini, blue_hairband, simple_background, official_alternate_costume, white_background, looking_at_viewer, hair_flower, navel, blush, upper_body |
| 2 | 32 |  |  |  |  |  | 1girl, blue_necktie, sleeveless_shirt, solo, white_shirt, military_uniform, simple_background, headgear, looking_at_viewer, white_background, pleated_skirt, off_shoulder, bare_shoulders, black_pantyhose, white_skirt, cowboy_shot, closed_mouth |
| 3 | 16 |  |  |  |  |  | rabbit_ears, detached_collar, fake_animal_ears, playboy_bunny, 1girl, blue_necktie, simple_background, solo, strapless_leotard, white_background, looking_at_viewer, wrist_cuffs, black_pantyhose, cowboy_shot, cleavage, white_leotard, necktie_between_breasts, rabbit_tail, thighband_pantyhose |
| 4 | 13 |  |  |  |  |  | 1girl, solo, off-shoulder_sweater, blush, simple_background, white_background, long_sleeves, necklace, official_alternate_costume, looking_at_viewer, pink_skirt, pleated_skirt, white_pantyhose, cowboy_shot, smile |
| 5 | 5 |  |  |  |  |  | 1girl, cleavage, navel, official_alternate_costume, race_queen, solo, miniskirt, blue_choker, blue_eyes, blue_skirt, closed_mouth, holding_umbrella, midriff, simple_background, white_hair, black_skirt, blue_thighhighs, blush, cowboy_shot, crop_top, cropped_jacket, fingerless_gloves, full_body, hair_between_eyes, hand_on_hip, mismatched_legwear, multicolored_clothes, standing, two-tone_skirt, underboob, white_background, white_thighhighs |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blue_hairband | official_alternate_costume | solo | blush | day | cowboy_shot | ocean | outdoors | blue_one-piece_swimsuit | blue_sky | cloud | hair_flower | sarong | looking_at_viewer | smile | casual_one-piece_swimsuit | blue_bikini | simple_background | white_background | navel | upper_body | blue_necktie | sleeveless_shirt | white_shirt | military_uniform | headgear | pleated_skirt | off_shoulder | bare_shoulders | black_pantyhose | white_skirt | closed_mouth | rabbit_ears | detached_collar | fake_animal_ears | playboy_bunny | strapless_leotard | wrist_cuffs | cleavage | white_leotard | necktie_between_breasts | rabbit_tail | thighband_pantyhose | off-shoulder_sweater | long_sleeves | necklace | pink_skirt | white_pantyhose | race_queen | miniskirt | blue_choker | blue_eyes | blue_skirt | holding_umbrella | midriff | white_hair | black_skirt | blue_thighhighs | crop_top | cropped_jacket | fingerless_gloves | full_body | hair_between_eyes | hand_on_hip | mismatched_legwear | multicolored_clothes | standing | two-tone_skirt | underboob | white_thighhighs |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------------|:-----------------------------|:-------|:--------|:------|:--------------|:--------|:-----------|:--------------------------|:-----------|:--------|:--------------|:---------|:--------------------|:--------|:----------------------------|:--------------|:--------------------|:-------------------|:--------|:-------------|:---------------|:-------------------|:--------------|:-------------------|:-----------|:----------------|:---------------|:-----------------|:------------------|:--------------|:---------------|:--------------|:------------------|:-------------------|:----------------|:--------------------|:--------------|:-----------|:----------------|:--------------------------|:--------------|:----------------------|:-----------------------|:---------------|:-----------|:-------------|:------------------|:-------------|:------------|:--------------|:------------|:-------------|:-------------------|:----------|:-------------|:--------------|:------------------|:-----------|:-----------------|:--------------------|:------------|:--------------------|:--------------|:---------------------|:-----------------------|:-----------|:-----------------|:------------|:-------------------|
| 0 | 9 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 13 |  |  |  |  |  | X | X | X | X | X | | | | | | | | X | | X | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 32 |  |  |  |  |  | X | | | X | | | X | | | | | | | | X | | | | X | X | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 16 |  |  |  |  |  | X | | | X | | | X | | | | | | | | X | | | | X | X | | | X | | | | | | | | X | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 13 |  |  |  |  |  | X | | X | X | X | | X | | | | | | | | X | X | | | X | X | | | | | | | | X | | | | | | | | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | | X | X | X | | X | | | | | | | | | | | | X | X | X | | | | | | | | | | | | X | | | | | | | X | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
dim/mt_bench_en | ---
license: mit
dataset_info:
features:
- name: question_id
dtype: int64
- name: category
dtype: string
- name: turns
sequence: string
splits:
- name: train
num_bytes: 34899
num_examples: 80
download_size: 24635
dataset_size: 34899
---
Original Source https://github.com/lm-sys/FastChat/blob/main/fastchat/llm_judge/data/mt_bench/question.jsonl
|
emozilla/booksum-summary-analysis_llama-16384 | ---
dataset_info:
features:
- name: chapter
dtype: string
- name: text
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 210534702.2666892
num_examples: 11808
- name: validation
num_bytes: 43846669.0
num_examples: 2234
- name: test
num_bytes: 27106410.273220748
num_examples: 1657
download_size: 134314056
dataset_size: 281487781.53990996
---
# Dataset Card for "booksum-summary-analysis_llama-16384"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
INSUNN/med-records-zh | ---
dataset_info:
features:
- name: context
dtype: string
- name: answers
dtype: string
- name: Q
dtype: string
- name: A
dtype: string
splits:
- name: train
num_bytes: 9478308
num_examples: 2031
download_size: 5018444
dataset_size: 9478308
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sanjin7/copy_dataset_untrimmed | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 28610253
num_examples: 84352
download_size: 0
dataset_size: 28610253
---
# Dataset Card for "copy_dataset_untrimmed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gimmaru/hellaswag | ---
dataset_info:
features:
- name: ind
dtype: int32
- name: activity_label
dtype: string
- name: ctx_a
dtype: string
- name: ctx_b
dtype: string
- name: ctx
dtype: string
- name: endings
sequence: string
- name: source_id
dtype: string
- name: split
dtype: string
- name: split_type
dtype: string
- name: label
dtype: string
splits:
- name: validation
num_bytes: 1119578
num_examples: 1000
download_size: 0
dataset_size: 1119578
---
# Dataset Card for "hellaswag"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Note: This dataset was utilized for the evaluation of probability-based prompt selection techniques in the paper '[Improving Probability-based Prompt Selection Through Unified Evaluation and Analysis](https://arxiv.org/abs/2305.14877)'. It differs from the actual benchmark dataset. |
dsrestrepo/Embeddings_cxr | ---
dataset_info:
features:
- name: path
dtype: string
- name: race_label
dtype: int64
- name: sex_label
dtype: int64
- name: disease_label
dtype: int64
- name: subject_id
dtype: int64
- name: study_id
dtype: int64
- name: split
dtype: string
- name: file_path
dtype: string
- name: image_id
dtype: string
- name: embeddings
dtype: string
splits:
- name: train
num_bytes: 14145391594
num_examples: 153128
download_size: 9302270600
dataset_size: 14145391594
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AdapterOcean/med_alpaca_standardized_cluster_86_alpaca | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 8361587
num_examples: 5265
download_size: 4411845
dataset_size: 8361587
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "med_alpaca_standardized_cluster_86_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
peldrak/coastal3 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: pixel_values
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 442266694.208
num_examples: 1296
- name: test
num_bytes: 147937358.0
num_examples: 370
download_size: 611506244
dataset_size: 590204052.208
---
# Dataset Card for "coastal3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joey234/mmlu-human_aging-verbal-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: neg_prompt
dtype: string
splits:
- name: test
num_bytes: 73308
num_examples: 223
download_size: 46912
dataset_size: 73308
---
# Dataset Card for "mmlu-human_aging-verbal-neg-prepend"
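The `answer` feature is stored as a class label whose names are declared in the YAML above; a minimal sketch of decoding it back to a letter:

```python
# Decode the integer `answer` label to its letter choice, per the card's
# class_label names ('0': A ... '3': D).
ANSWER_NAMES = ["A", "B", "C", "D"]

def decode_answer(label: int) -> str:
    return ANSWER_NAMES[label]

decode_answer(2)  # → "C"
```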
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
FINNUMBER/FINCH_TRAIN_QA_EQA_400 | ---
dataset_info:
features:
- name: task
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 2203508
num_examples: 400
download_size: 1181181
dataset_size: 2203508
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sushvij/generativeaisample3 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 52471.0
num_examples: 7
download_size: 53834
dataset_size: 52471.0
---
# Dataset Card for "generativeaisample3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
chavanarvind/faces | ---
license: apache-2.0
---
|
adalib/marvin-data | ---
dataset_info:
features:
- name: code
dtype: string
- name: apis
sequence: string
- name: extract_api
dtype: string
splits:
- name: train
num_bytes: 8643783
num_examples: 183
- name: test
num_bytes: 649382
num_examples: 35
download_size: 2152040
dataset_size: 9293165
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
AI-Secure/DecodingTrust | ---
license: cc-by-sa-4.0
task_categories:
- text-classification
- question-answering
- text-generation
- text2text-generation
language:
- en
pretty_name: DecodingTrust
size_categories:
- 10K<n<100K
arxiv: 2306.11698
configs:
- config_name: toxicity
data_files:
- split: realtoxicityprompts.nontoxic
path: "toxicity/user_prompts/nontoxic.jsonl"
- split: realtoxicityprompts.toxic
path: "toxicity/user_prompts/toxic.jsonl"
- split: toxic.gpt3.5
path: "toxicity/user_prompts/toxic.gpt3.5.jsonl"
- split: toxic.gpt4
path: "toxicity/user_prompts/toxic.gpt4.jsonl"
- config_name: adv_demonstration
data_files:
- split: counterfactual.snliPremiseCf
path: adv_demonstration/counterfactual/snli_premise_cf/42.jsonl
- split: counterfactual.snliHypothesisCf
path: adv_demonstration/counterfactual/snli_hypothesis_cf/42.jsonl
- split: counterfactual.controlRaisingCf
path: adv_demonstration/counterfactual/control_raising_cf/42.jsonl
- split: counterfactual.irregularFormCf
path: adv_demonstration/counterfactual/irregular_form_cf/42.jsonl
- split: counterfactual.mainVerbCf
path: adv_demonstration/counterfactual/main_verb_cf/42.jsonl
- split: counterfactual.syntacticCategoryCf
path: adv_demonstration/counterfactual/syntactic_category_cf/42.jsonl
- split: spurious.PP.entailBias
path: adv_demonstration/spurious/PP/entail-bias/42.jsonl
- split: spurious.PP.nonEntailBias
path: adv_demonstration/spurious/PP/non-entail-bias/42.jsonl
- split: spurious.adverb.entailBias
path: adv_demonstration/spurious/adverb/entail-bias/42.jsonl
- split: spurious.adverb.nonEntailBias
path: adv_demonstration/spurious/adverb/non-entail-bias/42.jsonl
- split: spurious.embeddedUnderVerb.entailBias
path: adv_demonstration/spurious/embedded_under_verb/entail-bias/42.jsonl
- split: spurious.embeddedUnderVerb.nonEntailBias
path: adv_demonstration/spurious/embedded_under_verb/non-entail-bias/42.jsonl
- split: spurious.lRelativeClause.entailBias
path: adv_demonstration/spurious/l_relative_clause/entail-bias/42.jsonl
- split: spurious.lRelativeClause.nonEntailBias
path: adv_demonstration/spurious/l_relative_clause/non-entail-bias/42.jsonl
- split: spurious.passive.entailBias
path: adv_demonstration/spurious/passive/entail-bias/42.jsonl
- split: spurious.passive.nonEntailBias
path: adv_demonstration/spurious/passive/non-entail-bias/42.jsonl
- split: spurious.sRelativeClause.entailBias
path: adv_demonstration/spurious/s_relative_clause/entail-bias/42.jsonl
- split: spurious.sRelativeClause.nonEntailBias
path: adv_demonstration/spurious/s_relative_clause/non-entail-bias/42.jsonl
- split: backdoor.sst2.setup1BadwordCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup1_badword_cacc/42.jsonl
- split: backdoor.sst2.setup1BadwordAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup1_badword_asr/42.jsonl
- split: backdoor.sst2.setup2BadwordCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup2_badword_cacc/42.jsonl
- split: backdoor.sst2.setup2BadwordAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup2_badword_asr/42.jsonl
- split: backdoor.sst2.setup3BadwordCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup3_badword_cacc/42.jsonl
- split: backdoor.sst2.setup3BadwordAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup3_badword_asr/42.jsonl
- split: backdoor.sst2.setup1AddsentCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup1_addsent_cacc/42.jsonl
- split: backdoor.sst2.setup1AddsentAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup1_addsent_asr/42.jsonl
- split: backdoor.sst2.setup2AddsentCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup2_addsent_cacc/42.jsonl
- split: backdoor.sst2.setup2AddsentAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup2_addsent_asr/42.jsonl
- split: backdoor.sst2.setup3AddsentCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup3_addsent_cacc/42.jsonl
- split: backdoor.sst2.setup3AddsentAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup3_addsent_asr/42.jsonl
- split: backdoor.sst2.setup1SynbkdCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup1_synbkd_cacc/42.jsonl
- split: backdoor.sst2.setup1SynbkdAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup1_synbkd_asr/42.jsonl
- split: backdoor.sst2.setup2SynbkdCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup2_synbkd_cacc/42.jsonl
- split: backdoor.sst2.setup2SynbkdAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup2_synbkd_asr/42.jsonl
- split: backdoor.sst2.setup3SynbkdCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup3_synbkd_cacc/42.jsonl
- split: backdoor.sst2.setup3SynbkdAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup3_synbkd_asr/42.jsonl
- split: backdoor.sst2.setup1StylebkdCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup1_stylebkd_cacc/42.jsonl
- split: backdoor.sst2.setup1StylebkdAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup1_stylebkd_asr/42.jsonl
- split: backdoor.sst2.setup2StylebkdCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup2_stylebkd_cacc/42.jsonl
- split: backdoor.sst2.setup2StylebkdAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup2_stylebkd_asr/42.jsonl
- split: backdoor.sst2.setup3StylebkdCacc
path: adv_demonstration/backdoor/experiment1/sst-2_setup3_stylebkd_cacc/42.jsonl
- split: backdoor.sst2.setup3StylebkdAsr
path: adv_demonstration/backdoor/experiment1/sst-2_setup3_stylebkd_asr/42.jsonl
- config_name: stereotype
data_files:
- split: stereotype
path: "stereotype/dataset/stereotype_bias_data.jsonl"
- config_name: adv-glue-plus-plus
data_files:
- split: sst2
path: "adv-glue-plus-plus/data/sst2.jsonl"
- split: qqp
path: "adv-glue-plus-plus/data/qqp.jsonl"
- split: mnli
path: "adv-glue-plus-plus/data/mnli.jsonl"
- split: mnli_mismatched
path: "adv-glue-plus-plus/data/mnli-mm.jsonl"
- split: qnli
path: "adv-glue-plus-plus/data/qnli.jsonl"
- split: rte
path: "adv-glue-plus-plus/data/rte.jsonl"
- config_name: machine_ethics
data_files:
- split: cm_train
path: "machine_ethics/cm_train.jsonl"
- split: cm_test
path: "machine_ethics/cm_test.jsonl"
- split: deontology_train
path: "machine_ethics/deontology_train.jsonl"
- split: deontology_test
path: "machine_ethics/deontology_test.jsonl"
- split: justice_train
path: "machine_ethics/justice_train.jsonl"
- split: justice_test
path: "machine_ethics/justice_test.jsonl"
- split: util_train
path: "machine_ethics/util_train.jsonl"
- split: util_test
path: "machine_ethics/util_test.jsonl"
- split: virtue_train
path: "machine_ethics/virtue_train.jsonl"
- split: virtue_test
path: "machine_ethics/virtue_test.jsonl"
- split: jiminy_train
path: "machine_ethics/jiminy_train.jsonl"
- split: jiminy_test
path: "machine_ethics/jiminy_test.jsonl"
- split: jiminy_subset
path: "machine_ethics/jiminy_subset.jsonl"
- config_name: privacy
data_files:
- split: enron.context
path: "privacy/enron_data/context.jsonl"
- split: enron.email2name
path: "privacy/enron_data/email2name.jsonl"
- split: enron.one_shot_non_domain
path: "privacy/enron_data/one_shot_non_domain.jsonl"
- split: enron.one_shot
path: "privacy/enron_data/one_shot.jsonl"
- split: enron.two_shot_non_domain
path: "privacy/enron_data/two_shot_non_domain.jsonl"
- split: enron.two_shot
path: "privacy/enron_data/two_shot.jsonl"
- split: enron.five_shot_non_domain
path: "privacy/enron_data/five_shot_non_domain.jsonl"
- split: enron.five_shot
path: "privacy/enron_data/five_shot.jsonl"
- config_name: fairness
data_files:
- split: adult.zero_shot.br_0.0
path: "fairness/fairness_data/adult_0_200_test_base_rate_0.0.jsonl"
- split: adult.zero_shot.br_0.5
path: "fairness/fairness_data/adult_0_200_test_base_rate_0.5.jsonl"
- split: adult.zero_shot.br_1.0
path: "fairness/fairness_data/adult_0_200_test_base_rate_1.0.jsonl"
- split: adult.few_shot.tr_br_0.0
path: "fairness/fairness_data/adult_32_200_train_base_rate_0.0.jsonl"
- split: adult.few_shot.tr_br_0.5
path: "fairness/fairness_data/adult_32_200_train_base_rate_0.5.jsonl"
- split: adult.few_shot.tr_br_1.0
path: "fairness/fairness_data/adult_32_200_train_base_rate_1.0.jsonl"
- split: adult.few_shot.num_train_0
path: "fairness/fairness_data/adult_0_200_train_br_0.0_test_br_0.5.jsonl"
- split: adult.few_shot.num_train_16
path: "fairness/fairness_data/adult_16_200_train_br_0.0_test_br_0.5.jsonl"
- split: adult.few_shot.num_train_32
path: "fairness/fairness_data/adult_32_200_train_br_0.0_test_br_0.5.jsonl"
- split: crime.zero_shot.br_0.0
path: "fairness/fairness_data/crime_0_300_test_base_rate_0.0.jsonl"
- split: crime.zero_shot.br_0.5
path: "fairness/fairness_data/crime_0_300_test_base_rate_0.5.jsonl"
- split: crime.zero_shot.br_1.0
path: "fairness/fairness_data/crime_0_300_test_base_rate_1.0.jsonl"
- config_name: ood
data_files:
- split: style
path: "ood/style.jsonl"
- split: knowledge
path: "ood/knowledge.jsonl"
---
# DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
## Overview
This repo contains the source code of DecodingTrust. This research endeavor is designed to help researchers better understand the capabilities, limitations, and potential risks of deploying state-of-the-art large language models (LLMs). See our paper for details.
[**DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models**](https://arxiv.org/abs/2306.11698)
*Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li.*
https://arxiv.org/pdf/2306.11698.pdf
This project is organized around the following **eight** primary areas of trustworthiness:
1. Toxicity
2. Stereotype and Bias
3. Adversarial robustness
4. Out-of-Distribution Robustness
5. Privacy
6. Robustness to Adversarial Demonstrations
7. Machine Ethics
8. Fairness
## Getting Started
To evaluate models with the DecodingTrust dataset, please install the DecodingTrust package as follows:
### (Conda +) Pip
For now, we suggest installing DecodingTrust by cloning our repository and installing it in editable mode. This keeps the data, code, and configurations in the same place.
```bash
git clone https://github.com/AI-secure/DecodingTrust.git && cd DecodingTrust
pip install -e .
```
Please note that this will install PyTorch with `pip`. If your system does not have a CUDA version compatible with the PyTorch `pip` wheel, install PyTorch with `conda` first, as shown below.
```bash
conda create --name dt-test python=3.9 pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
conda activate dt-test
pip install "decoding-trust @ git+https://github.com/AI-secure/DecodingTrust.git"
```
It is also possible to install DecodingTrust as a standalone package, but you will need to clone our repository again to run it with our data.
```bash
conda create --name dt-test python=3.9
conda activate dt-test
pip install "decoding-trust @ git+https://github.com/AI-secure/DecodingTrust.git"
```
### Support for the `ppc64le` Architecture
We also support the `ppc64le` architecture of IBM Power-9 platforms. To install on this platform, first make sure you have the following `conda` channels configured so that pre-built packages can be used.
```
--add channels 'defaults' # lowest priority
--add channels 'https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda-early-access/'
--add channels 'https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda/'
--add channels 'https://opence.mit.edu'
--add channels 'https://ftp.osuosl.org/pub/open-ce/current/'
--add channels 'conda-forge' # highest priority
```
Then, install the following pre-built packages.
```bash
mamba create --name dt-test python==3.9 pytorch=2.0.1 torchvision=0.15.2 spacy=3.5.3 scipy=1.10.1 fairlearn~=0.9.0 scikit-learn~=1.1.2 pandas~=2.0.3 pyarrow~=11.0.0 rust -c conda-forge
```
Finally, install DecodingTrust with `pip` as usual.
### Docker / Singularity
To use DecodingTrust with docker, simply pull the following docker image.
```bash
sudo docker pull danielz01/decoding-trust
docker run -it \
-v /path/on/host:/path/in/container \
--gpus all \
decoding-trust/v1.0:latest [arg1 arg2 ...]
```
To use it through Singularity or Apptainer container environments on HPC systems, simply run the following.
```bash
module load singularity # Change it to whatever module name your singularity / apptainer environment was given
singularity pull decoding-trust-v1.0.sif docker://danielz01/decoding-trust
singularity exec --nv --bind /path/on/host:/path/in/container decoding-trust-v1.0.sif [arg1 arg2]
```
We will also have a container built for `ppc64le` platforms soon. Stay tuned!
### Notes
+ Each of the eight areas has its own subdirectory containing the respective code and README.
+ Follow the specific `README`: Every subdirectory has its own README. Refer to these documents for information on how to run the scripts and interpret the results.
## [Important] Candidate models
In our benchmark, to keep conclusions and results consistent, we currently focus mainly on evaluating the following two OpenAI models:
- `gpt-3.5-turbo-0301`
- `gpt-4-0314`
**Note that we use `gpt-3.5-turbo-0301` (with timestamp), released in March, instead of `gpt-3.5-turbo`, to ensure reproducibility as models evolve.**
Currently, we support evaluating all causal LLMs **hosted on Hugging Face** or hosted locally. Specifically, we have tested the following open LLMs:
- `Llama-v2-7B-Chat`
- `Vicuna-7B`
- `MPT-7B`
- `Falcon-7B`
- `Alpaca-7B`
- `RedPajama-INCITE-7B-Instruct`
## Tutorial
We provide a [Tutorial](Tutorial.md) to walk you through using the API to evaluate different trustworthiness perspectives and LLMs.
## Useful tips
- Please first run your experiments with the `++dry_run=True` flag to check the input / output format, and use `gpt-3.5-turbo-0301` to check generations first since it has lower costs.
- We suggest saving the responses from OpenAI.
## File usage
- `main.py` provides a unified entry point to evaluate all the perspectives and different LLMs with proper configuration.
- `chat.py` provides robust APIs for creating requests to OpenAI **Chat Completion** models and Hugging Face autoregressive LLMs. We recommend implementing experiments based on this file. If you think `chat.py` is not good enough and want to make modifications, please let @acphile and @boxinw know.
- `utils.py` provides auxiliary functions.
For other files, please refer to each subdirectory for more information.
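The robust request handling in `chat.py` typically amounts to retrying transient API failures with jittered exponential backoff; a generic sketch of that pattern (an illustration of the idea, not the actual `chat.py` code):

```python
import random
import time

def with_backoff(request_fn, max_retries=5, base_delay=1.0):
    """Call request_fn, retrying on exceptions with jittered exponential backoff."""
    for attempt in range(max_retries):
        try:
            return request_fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # give up after the last attempt
            # Sleep base_delay * 2^attempt, scaled by a random factor in [0.5, 1.0)
            time.sleep(base_delay * (2 ** attempt) * (0.5 + random.random() / 2))
```

Wrapping each OpenAI or Hugging Face request this way keeps long evaluation runs from dying on rate limits or momentary network errors.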
## License
This project is licensed under [CC BY-SA 4.0](http://creativecommons.org/licenses/by-sa/4.0/legalcode) - see the LICENSE file for details.
## Citation
Please cite the paper as follows if you use the data or code from DecodingTrust:
```
@inproceedings{wang2023decodingtrust,
title={DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models},
author={Wang, Boxin and Chen, Weixin and Pei, Hengzhi and Xie, Chulin and Kang, Mintong and Zhang, Chenhui and Xu, Chejian and Xiong, Zidi and Dutta, Ritik and Schaeffer, Rylan and others},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023}
}
```
## Contact
Please reach out to us if you have any questions or suggestions. You can submit an issue or pull request, or send an email to boxinw2@illinois.edu.
Thank you for your interest in DecodingTrust. We hope our work will contribute to a more trustworthy, fair, and robust AI future. |
kunishou/do-not-answer-ja | ---
license: cc-by-nc-sa-4.0
---
This dataset was created by automatically translating "do-not-answer" into Japanese.
This dataset is licensed under CC-BY-NC-SA-4.0
do-not-answer-ja
https://github.com/kunishou/do-not-answer-ja
do-not-answer
https://github.com/Libr-AI/do-not-answer |
google/fleurs | ---
annotations_creators:
- expert-generated
- crowdsourced
- machine-generated
language_creators:
- crowdsourced
- expert-generated
language:
- afr
- amh
- ara
- asm
- ast
- azj
- bel
- ben
- bos
- cat
- ceb
- cmn
- ces
- cym
- dan
- deu
- ell
- eng
- spa
- est
- fas
- ful
- fin
- tgl
- fra
- gle
- glg
- guj
- hau
- heb
- hin
- hrv
- hun
- hye
- ind
- ibo
- isl
- ita
- jpn
- jav
- kat
- kam
- kea
- kaz
- khm
- kan
- kor
- ckb
- kir
- ltz
- lug
- lin
- lao
- lit
- luo
- lav
- mri
- mkd
- mal
- mon
- mar
- msa
- mlt
- mya
- nob
- npi
- nld
- nso
- nya
- oci
- orm
- ory
- pan
- pol
- pus
- por
- ron
- rus
- bul
- snd
- slk
- slv
- sna
- som
- srp
- swe
- swh
- tam
- tel
- tgk
- tha
- tur
- ukr
- umb
- urd
- uzb
- vie
- wol
- xho
- yor
- yue
- zul
license:
- cc-by-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
task_categories:
- automatic-speech-recognition
task_ids: []
pretty_name: 'The Cross-lingual TRansfer Evaluation of Multilingual Encoders for Speech
(XTREME-S) benchmark is a benchmark designed to evaluate speech representations
across languages, tasks, domains and data regimes. It covers 102 languages from
10+ language families, 3 different domains and 4 task families: speech recognition,
translation, classification and retrieval.'
tags:
- speech-recognition
---
# FLEURS
## Dataset Description
- **Fine-Tuning script:** [pytorch/speech-recognition](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition)
- **Paper:** [FLEURS: Few-shot Learning Evaluation of
Universal Representations of Speech](https://arxiv.org/abs/2205.12446)
- **Total amount of disk used:** ca. 350 GB
Fleurs is the speech version of the [FLoRes machine translation benchmark](https://arxiv.org/abs/2106.03193).
We use 2009 n-way parallel sentences from the publicly available FLoRes dev and devtest sets, covering 102 languages.
Training sets have around 10 hours of supervision. Speakers of the train sets are different from speakers of the dev/test sets. Multilingual fine-tuning is
used, and the "unit error rate" (over characters or signs) of all languages is averaged. Languages and results are also grouped into seven geographical areas:
- **Western Europe**: *Asturian, Bosnian, Catalan, Croatian, Danish, Dutch, English, Finnish, French, Galician, German, Greek, Hungarian, Icelandic, Irish, Italian, Kabuverdianu, Luxembourgish, Maltese, Norwegian, Occitan, Portuguese, Spanish, Swedish, Welsh*
- **Eastern Europe**: *Armenian, Belarusian, Bulgarian, Czech, Estonian, Georgian, Latvian, Lithuanian, Macedonian, Polish, Romanian, Russian, Serbian, Slovak, Slovenian, Ukrainian*
- **Central-Asia/Middle-East/North-Africa**: *Arabic, Azerbaijani, Hebrew, Kazakh, Kyrgyz, Mongolian, Pashto, Persian, Sorani-Kurdish, Tajik, Turkish, Uzbek*
- **Sub-Saharan Africa**: *Afrikaans, Amharic, Fula, Ganda, Hausa, Igbo, Kamba, Lingala, Luo, Northern-Sotho, Nyanja, Oromo, Shona, Somali, Swahili, Umbundu, Wolof, Xhosa, Yoruba, Zulu*
- **South-Asia**: *Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Nepali, Oriya, Punjabi, Sindhi, Tamil, Telugu, Urdu*
- **South-East Asia**: *Burmese, Cebuano, Filipino, Indonesian, Javanese, Khmer, Lao, Malay, Maori, Thai, Vietnamese*
- **CJK languages**: *Cantonese and Mandarin Chinese, Japanese, Korean*
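The averaged "unit error rate" mentioned above is essentially a character-level edit distance normalized by reference length; a minimal sketch of such a metric (an illustration, not the official scoring code):

```python
def unit_error_rate(reference: str, hypothesis: str) -> float:
    """Character-level Levenshtein distance divided by the reference length."""
    m, n = len(reference), len(hypothesis)
    dp = list(range(n + 1))  # dp[j] holds the distance for the previous row
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            # deletion, insertion, or substitution (free if characters match)
            dp[j] = min(dp[j] + 1, dp[j - 1] + 1,
                        prev + (reference[i - 1] != hypothesis[j - 1]))
            prev = cur
    return dp[n] / max(m, 1)
```

Averaging this quantity over all languages gives a single multilingual score of the kind reported above.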
## How to use & Supported Tasks
### How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
For example, to download the Hindi config, simply specify the corresponding language config name (i.e., "hi_in" for Hindi):
```python
from datasets import load_dataset
fleurs = load_dataset("google/fleurs", "hi_in", split="train")
```
Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
fleurs = load_dataset("google/fleurs", "hi_in", split="train", streaming=True)
print(next(iter(fleurs)))
```
*Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).
Local:
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler
fleurs = load_dataset("google/fleurs", "hi_in", split="train")
batch_sampler = BatchSampler(RandomSampler(fleurs), batch_size=32, drop_last=False)
dataloader = DataLoader(fleurs, batch_sampler=batch_sampler)
```
Streaming:
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
fleurs = load_dataset("google/fleurs", "hi_in", split="train")
dataloader = DataLoader(fleurs, batch_size=32)
```
To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
### Example scripts
Train your own CTC or Seq2Seq Automatic Speech Recognition models on FLEURS with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).
Fine-tune your own Language Identification models on FLEURS with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification)
### 1. Speech Recognition (ASR)
```py
from datasets import load_dataset
fleurs_asr = load_dataset("google/fleurs", "af_za") # for Afrikaans
# to download all data for multi-lingual fine-tuning uncomment following line
# fleurs_asr = load_dataset("google/fleurs", "all")
# see structure
print(fleurs_asr)
# load audio sample on the fly
audio_input = fleurs_asr["train"][0]["audio"] # first decoded audio sample
transcription = fleurs_asr["train"][0]["transcription"] # first transcription
# use `audio_input` and `transcription` to fine-tune your model for ASR
# for analyses see language groups
all_language_groups = fleurs_asr["train"].features["lang_group_id"].names
lang_group_id = fleurs_asr["train"][0]["lang_group_id"]
all_language_groups[lang_group_id]
```
### 2. Language Identification
LangID can often be a domain-classification problem, but in the case of FLEURS-LangID, recordings are made in a similar setting across languages and the utterances correspond to n-way parallel sentences in the exact same domain, making this task particularly relevant for evaluating LangID. The setting is simple: FLEURS-LangID is split into train/valid/test for each language, and we create a single train/valid/test split for LangID by merging all of them.
```py
from datasets import load_dataset
fleurs_langID = load_dataset("google/fleurs", "all") # to download all data
# see structure
print(fleurs_langID)
# load audio sample on the fly
audio_input = fleurs_langID["train"][0]["audio"] # first decoded audio sample
language_class = fleurs_langID["train"][0]["lang_id"] # first id class
language = fleurs_langID["train"].features["lang_id"].names[language_class]
# use audio_input and language_class to fine-tune your model for audio classification
```
### 3. Retrieval
Retrieval provides n-way parallel speech and text data. Similar to how XTREME for text leverages Tatoeba to evaluate bitext mining (a.k.a. sentence-translation retrieval), we use Retrieval to evaluate the quality of fixed-size representations of speech utterances. Our goal is to incentivize the creation of fixed-size speech encoders for speech retrieval. The system has to retrieve the English "key" utterance corresponding to the speech translation of "queries" in 15 languages. Results have to be reported on the test sets of Retrieval, whose utterances are used as queries (and keys for English). We augment the English keys with a large number of utterances to make the task more difficult.
```py
from datasets import load_dataset
fleurs_retrieval = load_dataset("google/fleurs", "af_za") # for Afrikaans
# to download all data for multi-lingual fine-tuning uncomment following line
# fleurs_retrieval = load_dataset("google/fleurs", "all")
# see structure
print(fleurs_retrieval)
# load audio sample on the fly
audio_input = fleurs_retrieval["train"][0]["audio"] # decoded audio sample
text_sample_pos = fleurs_retrieval["train"][0]["transcription"] # positive text sample
text_sample_neg = fleurs_retrieval["train"][1:20]["transcription"] # negative text samples
# use `audio_input`, `text_sample_pos`, and `text_sample_neg` to fine-tune your model for retrieval
```
Users can leverage the training (and dev) sets of FLEURS-Retrieval with a ranking loss to build better cross-lingual fixed-size representations of speech.
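A typical choice for such a ranking loss is a triplet margin objective over embedding pairs; a minimal PyTorch sketch (the function name and margin value are illustrative assumptions, not the paper's exact setup):

```python
import torch
import torch.nn.functional as F

def triplet_ranking_loss(anchor, positive, negative, margin=0.2):
    """Push the anchor (e.g. a speech embedding) closer to the positive
    (matching text embedding) than to the negative, by at least `margin`
    in cosine distance."""
    pos_dist = 1 - F.cosine_similarity(anchor, positive)
    neg_dist = 1 - F.cosine_similarity(anchor, negative)
    return torch.clamp(pos_dist - neg_dist + margin, min=0).mean()
```

Training a speech encoder with this kind of objective on FLEURS-Retrieval's paired audio and transcriptions is one way to obtain the fixed-size cross-lingual representations the task is designed to encourage.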
## Dataset Structure
We show detailed information for the example configuration `af_za` of the dataset.
All other configurations have the same structure.
### Data Instances
**af_za**
- Size of downloaded dataset files: 1.47 GB
- Size of the generated dataset: 1 MB
- Total amount of disk used: 1.47 GB
An example of a data instance of the config `af_za` looks as follows:
```
{'id': 91,
'num_samples': 385920,
'path': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/310a663d52322700b3d3473cbc5af429bd92a23f9bc683594e70bc31232db39e/home/vaxelrod/FLEURS/oss2_obfuscated/af_za/audio/train/17797742076841560615.wav',
'audio': {'path': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/310a663d52322700b3d3473cbc5af429bd92a23f9bc683594e70bc31232db39e/home/vaxelrod/FLEURS/oss2_obfuscated/af_za/audio/train/17797742076841560615.wav',
'array': array([ 0.0000000e+00, 0.0000000e+00, 0.0000000e+00, ...,
-1.1205673e-04, -8.4638596e-05, -1.2731552e-04], dtype=float32),
'sampling_rate': 16000},
'raw_transcription': 'Dit is nog nie huidiglik bekend watter aantygings gemaak sal word of wat owerhede na die seun gelei het nie maar jeugmisdaad-verrigtinge het in die federale hof begin',
'transcription': 'dit is nog nie huidiglik bekend watter aantygings gemaak sal word of wat owerhede na die seun gelei het nie maar jeugmisdaad-verrigtinge het in die federale hof begin',
'gender': 0,
'lang_id': 0,
'language': 'Afrikaans',
'lang_group_id': 3}
```
### Data Fields
The data fields are the same among all splits.
- **id** (int): ID of audio sample
- **num_samples** (int): Number of float values
- **path** (str): Path to the audio file
- **audio** (dict): Audio object including the loaded audio array, sampling rate, and path to the audio file
- **raw_transcription** (str): The non-normalized transcription of the audio file
- **transcription** (str): Transcription of the audio file
- **gender** (int): Class id of gender
- **lang_id** (int): Class id of language
- **lang_group_id** (int): Class id of language group
### Data Splits
Every config has a `"train"` split containing *ca.* 1000 examples, and `"validation"` and `"test"` splits each containing *ca.* 400 examples.
## Dataset Creation
We collect between one and three recordings for each sentence (2.3 on average), and build new train-dev-test splits with 1509, 150 and 350 sentences for
train, dev and test respectively.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is meant to encourage the development of speech technology in many more languages of the world. One of the goals is to give everyone equal access to technologies like speech recognition and speech translation, enabling better dubbing and better access to content from the internet (such as podcasts, streaming, or videos).
### Discussion of Biases
Most datasets have a fair distribution of gender utterances (e.g. the newly introduced FLEURS dataset). While many languages are covered from various regions of the world, the benchmark misses many languages that are all equally important. We believe technology built through FLEURS should generalize to all languages.
### Other Known Limitations
The dataset has a particular focus on read-speech because common evaluation benchmarks like CoVoST-2 or LibriSpeech evaluate on this type of speech. There is sometimes a known mismatch between performance obtained in a read-speech setting and a more noisy setting (in production for instance). Given the big progress that remains to be made on many languages, we believe better performance on FLEURS should still correlate well with actual progress made for speech understanding.
## Additional Information
All datasets are licensed under the [Creative Commons license (CC-BY)](https://creativecommons.org/licenses/).
### Citation Information
You can access the FLEURS paper at https://arxiv.org/abs/2205.12446.
Please cite the paper when referencing the FLEURS corpus as:
```
@article{fleurs2022arxiv,
title = {FLEURS: Few-shot Learning Evaluation of Universal Representations of Speech},
author = {Conneau, Alexis and Ma, Min and Khanuja, Simran and Zhang, Yu and Axelrod, Vera and Dalmia, Siddharth and Riesa, Jason and Rivera, Clara and Bapna, Ankur},
journal={arXiv preprint arXiv:2205.12446},
url = {https://arxiv.org/abs/2205.12446},
year = {2022},
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) and [@aconneau](https://github.com/aconneau) for adding this dataset.
|
fcakyon/pokemon-classification | ---
task_categories:
- image-classification
tags:
- roboflow
- roboflow2huggingface
- Gaming
---
<div align="center">
<img width="640" alt="fcakyon/pokemon-classification" src="https://huggingface.co/datasets/fcakyon/pokemon-classification/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['Golbat', 'Machoke', 'Omastar', 'Diglett', 'Lapras', 'Kabuto', 'Persian', 'Weepinbell', 'Golem', 'Dodrio', 'Raichu', 'Zapdos', 'Raticate', 'Magnemite', 'Ivysaur', 'Growlithe', 'Tangela', 'Drowzee', 'Rapidash', 'Venonat', 'Pidgeot', 'Nidorino', 'Porygon', 'Lickitung', 'Rattata', 'Machop', 'Charmeleon', 'Slowbro', 'Parasect', 'Eevee', 'Starmie', 'Staryu', 'Psyduck', 'Dragonair', 'Magikarp', 'Vileplume', 'Marowak', 'Pidgeotto', 'Shellder', 'Mewtwo', 'Farfetchd', 'Kingler', 'Seel', 'Kakuna', 'Doduo', 'Electabuzz', 'Charmander', 'Rhyhorn', 'Tauros', 'Dugtrio', 'Poliwrath', 'Gengar', 'Exeggutor', 'Dewgong', 'Jigglypuff', 'Geodude', 'Kadabra', 'Nidorina', 'Sandshrew', 'Grimer', 'MrMime', 'Pidgey', 'Koffing', 'Ekans', 'Alolan Sandslash', 'Venusaur', 'Snorlax', 'Paras', 'Jynx', 'Chansey', 'Hitmonchan', 'Gastly', 'Kangaskhan', 'Oddish', 'Wigglytuff', 'Graveler', 'Arcanine', 'Clefairy', 'Articuno', 'Poliwag', 'Abra', 'Squirtle', 'Voltorb', 'Ponyta', 'Moltres', 'Nidoqueen', 'Magmar', 'Onix', 'Vulpix', 'Butterfree', 'Krabby', 'Arbok', 'Clefable', 'Goldeen', 'Magneton', 'Dratini', 'Caterpie', 'Jolteon', 'Nidoking', 'Alakazam', 'Dragonite', 'Fearow', 'Slowpoke', 'Weezing', 'Beedrill', 'Weedle', 'Cloyster', 'Vaporeon', 'Gyarados', 'Golduck', 'Machamp', 'Hitmonlee', 'Primeape', 'Cubone', 'Sandslash', 'Scyther', 'Haunter', 'Metapod', 'Tentacruel', 'Aerodactyl', 'Kabutops', 'Ninetales', 'Zubat', 'Rhydon', 'Mew', 'Pinsir', 'Ditto', 'Victreebel', 'Omanyte', 'Horsea', 'Pikachu', 'Blastoise', 'Venomoth', 'Charizard', 'Seadra', 'Muk', 'Spearow', 'Bulbasaur', 'Bellsprout', 'Electrode', 'Gloom', 'Poliwhirl', 'Flareon', 'Seaking', 'Hypno', 'Wartortle', 'Mankey', 'Tentacool', 'Exeggcute', 'Meowth']
```
### Number of Images
```json
{'train': 4869, 'test': 732, 'valid': 1390}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("fcakyon/pokemon-classification", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/robert-demo-qvail/pokedex/dataset/14](https://universe.roboflow.com/robert-demo-qvail/pokedex/dataset/14?ref=roboflow2huggingface)
### Citation
```
@misc{ pokedex_dataset,
title = { Pokedex Dataset },
type = { Open Source Dataset },
author = { Lance Zhang },
howpublished = { \\url{ https://universe.roboflow.com/robert-demo-qvail/pokedex } },
url = { https://universe.roboflow.com/robert-demo-qvail/pokedex },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { dec },
note = { visited on 2023-01-14 },
}
```
### License
Public Domain
### Dataset Summary
This dataset was exported via roboflow.com on December 20, 2022 at 5:34 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
It includes 6991 images.
Pokemon are annotated in folder format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 224x224 (Fit (black edges))
No image augmentation techniques were applied.
|
shpotes/waxal-wolof | ---
license: cc-by-sa-4.0
---
|
bigbio/n2c2_2006_deid |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: DUA
pretty_name: n2c2 2006 De-identification
homepage: https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/
bigbio_pubmed: False
bigbio_public: False
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for n2c2 2006 De-identification
## Dataset Description
- **Homepage:** https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/
- **Pubmed:** False
- **Public:** False
- **Tasks:** NER
The data for the de-identification challenge came from Partners Healthcare and
included solely medical discharge summaries. We prepared the data for the
challenge by annotating and by replacing all authentic PHI with realistic
surrogates.
Given the above definitions, we marked the authentic PHI in the records in two stages.
In the first stage, we used an automatic system. In the second stage, we validated
the output of the automatic system manually. Three annotators, including undergraduate
and graduate students and a professor, serially made three manual passes over each record.
They marked and discussed the PHI tags they disagreed on and finalized these tags
after discussion.
The original dataset does not include spans for each entity. The spans are
computed in this loader, and the final text corresponding to the
tags is preserved in the source format.
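Such span computation can be sketched as a left-to-right search for each tagged entity string (an illustration of the idea, not the actual loader code):

```python
def compute_spans(text, entities):
    """Locate each tagged entity string in `text`, returning (start, end)
    character offsets; searching from a moving cursor means repeated
    mentions get successive, non-overlapping spans."""
    spans = []
    cursor = 0
    for entity in entities:
        start = text.find(entity, cursor)
        if start == -1:  # not found after the cursor; fall back to a full search
            start = text.find(entity)
        end = start + len(entity)
        spans.append((start, end))
        cursor = end
    return spans
```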
## Citation Information
```
@article{uzuner2007evaluating,
author = {
Uzuner, Özlem and
Luo, Yuan and
Szolovits, Peter
},
title = {Evaluating the State-of-the-Art in Automatic De-identification},
journal = {Journal of the American Medical Informatics Association},
volume = {14},
number = {5},
pages = {550-563},
year = {2007},
month = {09},
url = {https://doi.org/10.1197/jamia.M2444},
doi = {10.1197/jamia.M2444},
eprint = {https://academic.oup.com/jamia/article-pdf/14/5/550/2136261/14-5-550.pdf}
}
```
|
presencesw/multinli_neutral | ---
dataset_info:
features:
- name: gold_label
dtype: string
- name: anchor
dtype: string
- name: positive
dtype: string
- name: negative
dtype: string
splits:
- name: train
num_bytes: 69316627
num_examples: 274830
- name: dev_matched
num_bytes: 1889996
num_examples: 9815
- name: dev_mismatched
num_bytes: 2005539
num_examples: 9832
download_size: 30487282
dataset_size: 73212162
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev_matched
path: data/dev_matched-*
- split: dev_mismatched
path: data/dev_mismatched-*
---
|
rakesh46/wav2vec2-large-xls-r-300m-hindi-colab | ---
license: c-uda
---
|
jjonhwa/raw4_v1 | ---
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_start
dtype: int64
splits:
- name: train
num_bytes: 88688042
num_examples: 65987
download_size: 12238312
dataset_size: 88688042
---
# Dataset Card for "raw4_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
alesanm/chanel_short_descriptions | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 75596164.0
num_examples: 49
download_size: 75594184
dataset_size: 75596164.0
---
# Dataset Card for "chanel_short_descriptions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
El-chapoo/Complex_data-v1.3 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2004650699
num_examples: 4747311
download_size: 1041171652
dataset_size: 2004650699
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
G-Bhuvanesh/indian_food_images | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': burger
'1': butter_naan
'2': chai
'3': chapati
'4': chole_bhature
'5': dal_makhani
'6': dhokla
'7': fried_rice
'8': idli
'9': jalebi
'10': kaathi_rolls
'11': kadai_paneer
'12': kulfi
'13': masala_dosa
'14': momos
'15': paani_puri
'16': pakode
'17': pav_bhaji
'18': pizza
'19': samosa
splits:
- name: train
num_bytes: 1585470011.6082501
num_examples: 5327
- name: test
num_bytes: 262239863.72574985
num_examples: 941
download_size: 1600405916
dataset_size: 1847709875.334
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
LLMao/2024_03_10_05_44_44_Archive | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: content
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 617185
num_examples: 180
download_size: 115519
dataset_size: 617185
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Isaak-Carter/JOSIE_v928.15 | ---
dataset_info:
features:
- name: sample
dtype: string
splits:
- name: train
num_bytes: 6512059
num_examples: 2348
download_size: 0
dataset_size: 6512059
---
# Dataset Card for "JOSIE_v928.15"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
michelcarroll/llama2-earnings-stock-prediction-fine-tune-v3 | ---
dataset_info:
features:
- name: completion
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 87920323
num_examples: 111140
- name: development
num_bytes: 26603449
num_examples: 33284
- name: test
num_bytes: 840735
num_examples: 1000
download_size: 47167270
dataset_size: 115364507
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: development
path: data/development-*
- split: test
path: data/test-*
---
|
firojm57/first-dataset | ---
license: mit
---
<s>
<INST>
<<SYS>> This is alta view <</SYS>>
What is Policy?
</INST>
Policy is a set of rules to protect an asset
</s> |
Nadav/pixel_glue_wnli_high_noise | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: validation
num_bytes: 2693300.0
num_examples: 71
download_size: 2693542
dataset_size: 2693300.0
---
# Dataset Card for "pixel_glue_wnli_high_noise"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Sandipan1994/eqasc_data | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 11336048
num_examples: 84964
- name: validation
num_bytes: 1296119
num_examples: 9710
- name: test
num_bytes: 1259181
num_examples: 9630
download_size: 4494168
dataset_size: 13891348
---
# Dataset Card for "eqasc_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_fhai50032__BeagleLake-7B-Toxic | ---
pretty_name: Evaluation run of fhai50032/BeagleLake-7B-Toxic
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [fhai50032/BeagleLake-7B-Toxic](https://huggingface.co/fhai50032/BeagleLake-7B-Toxic)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_fhai50032__BeagleLake-7B-Toxic\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-09T23:34:39.429099](https://huggingface.co/datasets/open-llm-leaderboard/details_fhai50032__BeagleLake-7B-Toxic/blob/main/results_2024-02-09T23-34-39.429099.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6318413962067819,\n\
\ \"acc_stderr\": 0.032498981232405,\n \"acc_norm\": 0.6321479053629802,\n\
\ \"acc_norm_stderr\": 0.03317236474623438,\n \"mc1\": 0.4173806609547124,\n\
\ \"mc1_stderr\": 0.017262891063272178,\n \"mc2\": 0.5766565175013683,\n\
\ \"mc2_stderr\": 0.01543784468587398\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6279863481228669,\n \"acc_stderr\": 0.01412459788184446,\n\
\ \"acc_norm\": 0.6518771331058021,\n \"acc_norm_stderr\": 0.013921008595179342\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6484763991236805,\n\
\ \"acc_stderr\": 0.004764703145680276,\n \"acc_norm\": 0.8382792272455686,\n\
\ \"acc_norm_stderr\": 0.0036744197993536704\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6370370370370371,\n\
\ \"acc_stderr\": 0.04153948404742398,\n \"acc_norm\": 0.6370370370370371,\n\
\ \"acc_norm_stderr\": 0.04153948404742398\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6842105263157895,\n \"acc_stderr\": 0.03782728980865469,\n\
\ \"acc_norm\": 0.6842105263157895,\n \"acc_norm_stderr\": 0.03782728980865469\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.58,\n\
\ \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \
\ \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7056603773584905,\n \"acc_stderr\": 0.02804918631569526,\n\
\ \"acc_norm\": 0.7056603773584905,\n \"acc_norm_stderr\": 0.02804918631569526\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7430555555555556,\n\
\ \"acc_stderr\": 0.03653946969442099,\n \"acc_norm\": 0.7430555555555556,\n\
\ \"acc_norm_stderr\": 0.03653946969442099\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \
\ \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.51,\n \"acc_stderr\": 0.05024183937956911,\n \"acc_norm\"\
: 0.51,\n \"acc_norm_stderr\": 0.05024183937956911\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.29,\n \"acc_stderr\": 0.04560480215720684,\n \
\ \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.04560480215720684\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6011560693641619,\n\
\ \"acc_stderr\": 0.037336266553835096,\n \"acc_norm\": 0.6011560693641619,\n\
\ \"acc_norm_stderr\": 0.037336266553835096\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.43137254901960786,\n \"acc_stderr\": 0.04928099597287534,\n\
\ \"acc_norm\": 0.43137254901960786,\n \"acc_norm_stderr\": 0.04928099597287534\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n\
\ \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5361702127659574,\n \"acc_stderr\": 0.03260038511835771,\n\
\ \"acc_norm\": 0.5361702127659574,\n \"acc_norm_stderr\": 0.03260038511835771\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4298245614035088,\n\
\ \"acc_stderr\": 0.04657047260594963,\n \"acc_norm\": 0.4298245614035088,\n\
\ \"acc_norm_stderr\": 0.04657047260594963\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5586206896551724,\n \"acc_stderr\": 0.04137931034482757,\n\
\ \"acc_norm\": 0.5586206896551724,\n \"acc_norm_stderr\": 0.04137931034482757\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.41534391534391535,\n \"acc_stderr\": 0.0253795249107784,\n \"\
acc_norm\": 0.41534391534391535,\n \"acc_norm_stderr\": 0.0253795249107784\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4444444444444444,\n\
\ \"acc_stderr\": 0.044444444444444495,\n \"acc_norm\": 0.4444444444444444,\n\
\ \"acc_norm_stderr\": 0.044444444444444495\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.44,\n \"acc_stderr\": 0.04988876515698589,\n \
\ \"acc_norm\": 0.44,\n \"acc_norm_stderr\": 0.04988876515698589\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7645161290322581,\n\
\ \"acc_stderr\": 0.02413763242933771,\n \"acc_norm\": 0.7645161290322581,\n\
\ \"acc_norm_stderr\": 0.02413763242933771\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.4630541871921182,\n \"acc_stderr\": 0.035083705204426656,\n\
\ \"acc_norm\": 0.4630541871921182,\n \"acc_norm_stderr\": 0.035083705204426656\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\"\
: 0.68,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7636363636363637,\n \"acc_stderr\": 0.03317505930009182,\n\
\ \"acc_norm\": 0.7636363636363637,\n \"acc_norm_stderr\": 0.03317505930009182\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7828282828282829,\n \"acc_stderr\": 0.029376616484945633,\n \"\
acc_norm\": 0.7828282828282829,\n \"acc_norm_stderr\": 0.029376616484945633\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8704663212435233,\n \"acc_stderr\": 0.02423353229775873,\n\
\ \"acc_norm\": 0.8704663212435233,\n \"acc_norm_stderr\": 0.02423353229775873\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6282051282051282,\n \"acc_stderr\": 0.024503472557110936,\n\
\ \"acc_norm\": 0.6282051282051282,\n \"acc_norm_stderr\": 0.024503472557110936\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.32592592592592595,\n \"acc_stderr\": 0.028578348365473075,\n \
\ \"acc_norm\": 0.32592592592592595,\n \"acc_norm_stderr\": 0.028578348365473075\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.030388353551886786,\n\
\ \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.030388353551886786\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.2847682119205298,\n \"acc_stderr\": 0.03684881521389023,\n \"\
acc_norm\": 0.2847682119205298,\n \"acc_norm_stderr\": 0.03684881521389023\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8293577981651377,\n \"acc_stderr\": 0.01612927102509986,\n \"\
acc_norm\": 0.8293577981651377,\n \"acc_norm_stderr\": 0.01612927102509986\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.4722222222222222,\n \"acc_stderr\": 0.0340470532865388,\n \"acc_norm\"\
: 0.4722222222222222,\n \"acc_norm_stderr\": 0.0340470532865388\n },\n\
\ \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7941176470588235,\n\
\ \"acc_stderr\": 0.028379449451588667,\n \"acc_norm\": 0.7941176470588235,\n\
\ \"acc_norm_stderr\": 0.028379449451588667\n },\n \"harness|hendrycksTest-high_school_world_history|5\"\
: {\n \"acc\": 0.8016877637130801,\n \"acc_stderr\": 0.025955020841621133,\n\
\ \"acc_norm\": 0.8016877637130801,\n \"acc_norm_stderr\": 0.025955020841621133\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6547085201793722,\n\
\ \"acc_stderr\": 0.031911001928357954,\n \"acc_norm\": 0.6547085201793722,\n\
\ \"acc_norm_stderr\": 0.031911001928357954\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7557251908396947,\n \"acc_stderr\": 0.03768335959728743,\n\
\ \"acc_norm\": 0.7557251908396947,\n \"acc_norm_stderr\": 0.03768335959728743\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7851239669421488,\n \"acc_stderr\": 0.037494924487096966,\n \"\
acc_norm\": 0.7851239669421488,\n \"acc_norm_stderr\": 0.037494924487096966\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7685185185185185,\n\
\ \"acc_stderr\": 0.04077494709252626,\n \"acc_norm\": 0.7685185185185185,\n\
\ \"acc_norm_stderr\": 0.04077494709252626\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7484662576687117,\n \"acc_stderr\": 0.03408997886857529,\n\
\ \"acc_norm\": 0.7484662576687117,\n \"acc_norm_stderr\": 0.03408997886857529\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4732142857142857,\n\
\ \"acc_stderr\": 0.047389751192741546,\n \"acc_norm\": 0.4732142857142857,\n\
\ \"acc_norm_stderr\": 0.047389751192741546\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7961165048543689,\n \"acc_stderr\": 0.03989139859531771,\n\
\ \"acc_norm\": 0.7961165048543689,\n \"acc_norm_stderr\": 0.03989139859531771\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8717948717948718,\n\
\ \"acc_stderr\": 0.02190190511507333,\n \"acc_norm\": 0.8717948717948718,\n\
\ \"acc_norm_stderr\": 0.02190190511507333\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \
\ \"acc_norm\": 0.66,\n \"acc_norm_stderr\": 0.04760952285695237\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8135376756066411,\n\
\ \"acc_stderr\": 0.013927751372001506,\n \"acc_norm\": 0.8135376756066411,\n\
\ \"acc_norm_stderr\": 0.013927751372001506\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6994219653179191,\n \"acc_stderr\": 0.024685316867257803,\n\
\ \"acc_norm\": 0.6994219653179191,\n \"acc_norm_stderr\": 0.024685316867257803\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3340782122905028,\n\
\ \"acc_stderr\": 0.015774911422381632,\n \"acc_norm\": 0.3340782122905028,\n\
\ \"acc_norm_stderr\": 0.015774911422381632\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7124183006535948,\n \"acc_stderr\": 0.02591780611714716,\n\
\ \"acc_norm\": 0.7124183006535948,\n \"acc_norm_stderr\": 0.02591780611714716\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.707395498392283,\n\
\ \"acc_stderr\": 0.02583989833487798,\n \"acc_norm\": 0.707395498392283,\n\
\ \"acc_norm_stderr\": 0.02583989833487798\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6944444444444444,\n \"acc_stderr\": 0.02563082497562136,\n\
\ \"acc_norm\": 0.6944444444444444,\n \"acc_norm_stderr\": 0.02563082497562136\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4432624113475177,\n \"acc_stderr\": 0.029634838473766006,\n \
\ \"acc_norm\": 0.4432624113475177,\n \"acc_norm_stderr\": 0.029634838473766006\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4511082138200782,\n\
\ \"acc_stderr\": 0.012709037347346233,\n \"acc_norm\": 0.4511082138200782,\n\
\ \"acc_norm_stderr\": 0.012709037347346233\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6544117647058824,\n \"acc_stderr\": 0.02888819310398863,\n\
\ \"acc_norm\": 0.6544117647058824,\n \"acc_norm_stderr\": 0.02888819310398863\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6486928104575164,\n \"acc_stderr\": 0.019312676065786554,\n \
\ \"acc_norm\": 0.6486928104575164,\n \"acc_norm_stderr\": 0.019312676065786554\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6727272727272727,\n\
\ \"acc_stderr\": 0.0449429086625209,\n \"acc_norm\": 0.6727272727272727,\n\
\ \"acc_norm_stderr\": 0.0449429086625209\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7183673469387755,\n \"acc_stderr\": 0.028795185574291296,\n\
\ \"acc_norm\": 0.7183673469387755,\n \"acc_norm_stderr\": 0.028795185574291296\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8557213930348259,\n\
\ \"acc_stderr\": 0.024845753212306053,\n \"acc_norm\": 0.8557213930348259,\n\
\ \"acc_norm_stderr\": 0.024845753212306053\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.85,\n \"acc_stderr\": 0.0358870281282637,\n \
\ \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.0358870281282637\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.536144578313253,\n\
\ \"acc_stderr\": 0.038823108508905954,\n \"acc_norm\": 0.536144578313253,\n\
\ \"acc_norm_stderr\": 0.038823108508905954\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8070175438596491,\n \"acc_stderr\": 0.030267457554898458,\n\
\ \"acc_norm\": 0.8070175438596491,\n \"acc_norm_stderr\": 0.030267457554898458\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4173806609547124,\n\
\ \"mc1_stderr\": 0.017262891063272178,\n \"mc2\": 0.5766565175013683,\n\
\ \"mc2_stderr\": 0.01543784468587398\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8232044198895028,\n \"acc_stderr\": 0.01072192328791875\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6360879454131918,\n \
\ \"acc_stderr\": 0.013252539227966197\n }\n}\n```"
repo_url: https://huggingface.co/fhai50032/BeagleLake-7B-Toxic
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|arc:challenge|25_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|gsm8k|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hellaswag|10_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-09T23-34-39.429099.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-09T23-34-39.429099.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- '**/details_harness|winogrande|5_2024-02-09T23-34-39.429099.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-09T23-34-39.429099.parquet'
- config_name: results
data_files:
- split: 2024_02_09T23_34_39.429099
path:
- results_2024-02-09T23-34-39.429099.parquet
- split: latest
path:
- results_2024-02-09T23-34-39.429099.parquet
---
# Dataset Card for Evaluation run of fhai50032/BeagleLake-7B-Toxic
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [fhai50032/BeagleLake-7B-Toxic](https://huggingface.co/fhai50032/BeagleLake-7B-Toxic) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_fhai50032__BeagleLake-7B-Toxic",
"harness_winogrande_5",
	split="latest")
```
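The timestamped split names are derived from the run timestamp with `-` and `:` replaced by `_` (e.g. `2024_02_09T23_34_39.429099` in the YAML above). This is an inference from the split names listed in the metadata rather than documented behavior; a minimal sketch of the transformation:

```python
# Derive a split name from a run timestamp, as seen in the YAML configs
# above (naming rule inferred from the listed split names, not documented).
def split_name_from_timestamp(ts: str) -> str:
    return ts.replace("-", "_").replace(":", "_")

print(split_name_from_timestamp("2024-02-09T23:34:39.429099"))
```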
## Latest results
These are the [latest results from run 2024-02-09T23:34:39.429099](https://huggingface.co/datasets/open-llm-leaderboard/details_fhai50032__BeagleLake-7B-Toxic/blob/main/results_2024-02-09T23-34-39.429099.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6318413962067819,
"acc_stderr": 0.032498981232405,
"acc_norm": 0.6321479053629802,
"acc_norm_stderr": 0.03317236474623438,
"mc1": 0.4173806609547124,
"mc1_stderr": 0.017262891063272178,
"mc2": 0.5766565175013683,
"mc2_stderr": 0.01543784468587398
},
"harness|arc:challenge|25": {
"acc": 0.6279863481228669,
"acc_stderr": 0.01412459788184446,
"acc_norm": 0.6518771331058021,
"acc_norm_stderr": 0.013921008595179342
},
"harness|hellaswag|10": {
"acc": 0.6484763991236805,
"acc_stderr": 0.004764703145680276,
"acc_norm": 0.8382792272455686,
"acc_norm_stderr": 0.0036744197993536704
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6370370370370371,
"acc_stderr": 0.04153948404742398,
"acc_norm": 0.6370370370370371,
"acc_norm_stderr": 0.04153948404742398
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6842105263157895,
"acc_stderr": 0.03782728980865469,
"acc_norm": 0.6842105263157895,
"acc_norm_stderr": 0.03782728980865469
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7056603773584905,
"acc_stderr": 0.02804918631569526,
"acc_norm": 0.7056603773584905,
"acc_norm_stderr": 0.02804918631569526
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7430555555555556,
"acc_stderr": 0.03653946969442099,
"acc_norm": 0.7430555555555556,
"acc_norm_stderr": 0.03653946969442099
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956911,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956911
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.29,
"acc_stderr": 0.04560480215720684,
"acc_norm": 0.29,
"acc_norm_stderr": 0.04560480215720684
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6011560693641619,
"acc_stderr": 0.037336266553835096,
"acc_norm": 0.6011560693641619,
"acc_norm_stderr": 0.037336266553835096
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.43137254901960786,
"acc_stderr": 0.04928099597287534,
"acc_norm": 0.43137254901960786,
"acc_norm_stderr": 0.04928099597287534
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5361702127659574,
"acc_stderr": 0.03260038511835771,
"acc_norm": 0.5361702127659574,
"acc_norm_stderr": 0.03260038511835771
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4298245614035088,
"acc_stderr": 0.04657047260594963,
"acc_norm": 0.4298245614035088,
"acc_norm_stderr": 0.04657047260594963
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5586206896551724,
"acc_stderr": 0.04137931034482757,
"acc_norm": 0.5586206896551724,
"acc_norm_stderr": 0.04137931034482757
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.41534391534391535,
"acc_stderr": 0.0253795249107784,
"acc_norm": 0.41534391534391535,
"acc_norm_stderr": 0.0253795249107784
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4444444444444444,
"acc_stderr": 0.044444444444444495,
"acc_norm": 0.4444444444444444,
"acc_norm_stderr": 0.044444444444444495
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.44,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.44,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7645161290322581,
"acc_stderr": 0.02413763242933771,
"acc_norm": 0.7645161290322581,
"acc_norm_stderr": 0.02413763242933771
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4630541871921182,
"acc_stderr": 0.035083705204426656,
"acc_norm": 0.4630541871921182,
"acc_norm_stderr": 0.035083705204426656
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.68,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.68,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7636363636363637,
"acc_stderr": 0.03317505930009182,
"acc_norm": 0.7636363636363637,
"acc_norm_stderr": 0.03317505930009182
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7828282828282829,
"acc_stderr": 0.029376616484945633,
"acc_norm": 0.7828282828282829,
"acc_norm_stderr": 0.029376616484945633
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8704663212435233,
"acc_stderr": 0.02423353229775873,
"acc_norm": 0.8704663212435233,
"acc_norm_stderr": 0.02423353229775873
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6282051282051282,
"acc_stderr": 0.024503472557110936,
"acc_norm": 0.6282051282051282,
"acc_norm_stderr": 0.024503472557110936
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.32592592592592595,
"acc_stderr": 0.028578348365473075,
"acc_norm": 0.32592592592592595,
"acc_norm_stderr": 0.028578348365473075
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.030388353551886786,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.030388353551886786
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2847682119205298,
"acc_stderr": 0.03684881521389023,
"acc_norm": 0.2847682119205298,
"acc_norm_stderr": 0.03684881521389023
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8293577981651377,
"acc_stderr": 0.01612927102509986,
"acc_norm": 0.8293577981651377,
"acc_norm_stderr": 0.01612927102509986
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4722222222222222,
"acc_stderr": 0.0340470532865388,
"acc_norm": 0.4722222222222222,
"acc_norm_stderr": 0.0340470532865388
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7941176470588235,
"acc_stderr": 0.028379449451588667,
"acc_norm": 0.7941176470588235,
"acc_norm_stderr": 0.028379449451588667
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8016877637130801,
"acc_stderr": 0.025955020841621133,
"acc_norm": 0.8016877637130801,
"acc_norm_stderr": 0.025955020841621133
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6547085201793722,
"acc_stderr": 0.031911001928357954,
"acc_norm": 0.6547085201793722,
"acc_norm_stderr": 0.031911001928357954
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7557251908396947,
"acc_stderr": 0.03768335959728743,
"acc_norm": 0.7557251908396947,
"acc_norm_stderr": 0.03768335959728743
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7851239669421488,
"acc_stderr": 0.037494924487096966,
"acc_norm": 0.7851239669421488,
"acc_norm_stderr": 0.037494924487096966
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7685185185185185,
"acc_stderr": 0.04077494709252626,
"acc_norm": 0.7685185185185185,
"acc_norm_stderr": 0.04077494709252626
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7484662576687117,
"acc_stderr": 0.03408997886857529,
"acc_norm": 0.7484662576687117,
"acc_norm_stderr": 0.03408997886857529
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4732142857142857,
"acc_stderr": 0.047389751192741546,
"acc_norm": 0.4732142857142857,
"acc_norm_stderr": 0.047389751192741546
},
"harness|hendrycksTest-management|5": {
"acc": 0.7961165048543689,
"acc_stderr": 0.03989139859531771,
"acc_norm": 0.7961165048543689,
"acc_norm_stderr": 0.03989139859531771
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8717948717948718,
"acc_stderr": 0.02190190511507333,
"acc_norm": 0.8717948717948718,
"acc_norm_stderr": 0.02190190511507333
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695237,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695237
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8135376756066411,
"acc_stderr": 0.013927751372001506,
"acc_norm": 0.8135376756066411,
"acc_norm_stderr": 0.013927751372001506
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6994219653179191,
"acc_stderr": 0.024685316867257803,
"acc_norm": 0.6994219653179191,
"acc_norm_stderr": 0.024685316867257803
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3340782122905028,
"acc_stderr": 0.015774911422381632,
"acc_norm": 0.3340782122905028,
"acc_norm_stderr": 0.015774911422381632
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7124183006535948,
"acc_stderr": 0.02591780611714716,
"acc_norm": 0.7124183006535948,
"acc_norm_stderr": 0.02591780611714716
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.707395498392283,
"acc_stderr": 0.02583989833487798,
"acc_norm": 0.707395498392283,
"acc_norm_stderr": 0.02583989833487798
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6944444444444444,
"acc_stderr": 0.02563082497562136,
"acc_norm": 0.6944444444444444,
"acc_norm_stderr": 0.02563082497562136
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4432624113475177,
"acc_stderr": 0.029634838473766006,
"acc_norm": 0.4432624113475177,
"acc_norm_stderr": 0.029634838473766006
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4511082138200782,
"acc_stderr": 0.012709037347346233,
"acc_norm": 0.4511082138200782,
"acc_norm_stderr": 0.012709037347346233
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6544117647058824,
"acc_stderr": 0.02888819310398863,
"acc_norm": 0.6544117647058824,
"acc_norm_stderr": 0.02888819310398863
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6486928104575164,
"acc_stderr": 0.019312676065786554,
"acc_norm": 0.6486928104575164,
"acc_norm_stderr": 0.019312676065786554
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6727272727272727,
"acc_stderr": 0.0449429086625209,
"acc_norm": 0.6727272727272727,
"acc_norm_stderr": 0.0449429086625209
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7183673469387755,
"acc_stderr": 0.028795185574291296,
"acc_norm": 0.7183673469387755,
"acc_norm_stderr": 0.028795185574291296
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8557213930348259,
"acc_stderr": 0.024845753212306053,
"acc_norm": 0.8557213930348259,
"acc_norm_stderr": 0.024845753212306053
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.0358870281282637,
"acc_norm": 0.85,
"acc_norm_stderr": 0.0358870281282637
},
"harness|hendrycksTest-virology|5": {
"acc": 0.536144578313253,
"acc_stderr": 0.038823108508905954,
"acc_norm": 0.536144578313253,
"acc_norm_stderr": 0.038823108508905954
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8070175438596491,
"acc_stderr": 0.030267457554898458,
"acc_norm": 0.8070175438596491,
"acc_norm_stderr": 0.030267457554898458
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4173806609547124,
"mc1_stderr": 0.017262891063272178,
"mc2": 0.5766565175013683,
"mc2_stderr": 0.01543784468587398
},
"harness|winogrande|5": {
"acc": 0.8232044198895028,
"acc_stderr": 0.01072192328791875
},
"harness|gsm8k|5": {
"acc": 0.6360879454131918,
"acc_stderr": 0.013252539227966197
}
}
```
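Aggregate metrics such as the MMLU average can be recomputed from a dictionary shaped like the one above. A minimal sketch, in which the `results` dict holds only two of the `hendrycksTest` tasks for brevity (the full dictionary has 57 such entries):

```python
# Minimal sketch: recompute the MMLU (hendrycksTest) average accuracy
# from a results dictionary shaped like the JSON above.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.33},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.6370370370370371},
    # ... remaining hendrycksTest tasks elided for brevity
}

mmlu_scores = [
    v["acc"] for k, v in results.items() if k.startswith("harness|hendrycksTest-")
]
mmlu_avg = sum(mmlu_scores) / len(mmlu_scores)
print(f"MMLU average over {len(mmlu_scores)} tasks: {mmlu_avg:.4f}")
```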
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
cquaker/yi-bagel-dpo | ---
dataset_info:
features:
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: source
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 655407611
num_examples: 192036
download_size: 369017835
dataset_size: 655407611
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
daisyjojo/deeprx_zipped | ---
license: other
---
|
one-sec-cv12/chunk_218 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 20988505008.875
num_examples: 218521
download_size: 19838852943
dataset_size: 20988505008.875
---
# Dataset Card for "chunk_218"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sethapun/arithmetic_2md_1to250 | ---
dataset_info:
features:
- name: expression
dtype: string
- name: answer
dtype: float64
- name: label
dtype:
class_label:
names:
'0': 'false'
'1': 'true'
splits:
- name: train
num_bytes: 60236
num_examples: 2000
- name: validation
num_bytes: 11988
num_examples: 400
download_size: 32920
dataset_size: 72224
---
# Dataset Card for "arithmetic_2md_1to250"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
deman539/celebrity_in_movie_demo | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
0: output
splits:
- name: train
num_bytes: 2237547.0
num_examples: 5
download_size: 1373409
dataset_size: 2237547.0
---
# Dataset Card for "celebrity_in_movie_demo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KJohnes/CMP_facade_DB_base | ---
license: unknown
---
|
Falah/artist_rooms_descriptions | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 638372
num_examples: 1000
download_size: 54548
dataset_size: 638372
---
# Dataset Card for "artist_rooms_descriptions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SeyedAli/Persian-Text-Sentiment | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 10222986
num_examples: 55852
- name: test
num_bytes: 2575303
num_examples: 13964
download_size: 6076096
dataset_size: 12798289
task_categories:
- text-classification
language:
- fa
---
Dataset Classes
* negative: 0
* positive: 1 |
damerajee/pretrained_large | ---
language:
- hi
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 8246398416
num_examples: 1463327
download_size: 3089711172
dataset_size: 8246398416
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
shirsh10mall/First_LLM_Project | ---
dataset_info:
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
- name: length before preprocessing
dtype: int64
splits:
- name: train
num_bytes: 6081435886.2271385
num_examples: 3587162
download_size: 2467698839
dataset_size: 6081435886.2271385
---
# Dataset Card for "First_LLM_Project"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r8-q_k_v_o | ---
pretty_name: Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o](https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r8-q_k_v_o_public\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-11-06T15:43:11.163444](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r8-q_k_v_o_public/blob/main/results_2023-11-06T15-43-11.163444.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.37458053691275167,\n\
\ \"em_stderr\": 0.004956760684602152,\n \"f1\": 0.41704173657718185,\n\
\ \"f1_stderr\": 0.004847488019820457,\n \"acc\": 0.45805311598499976,\n\
\ \"acc_stderr\": 0.010642754511101384\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.37458053691275167,\n \"em_stderr\": 0.004956760684602152,\n\
\ \"f1\": 0.41704173657718185,\n \"f1_stderr\": 0.004847488019820457\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.14025777103866566,\n \
\ \"acc_stderr\": 0.009565108281428666\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7758484609313339,\n \"acc_stderr\": 0.011720400740774104\n\
\ }\n}\n```"
repo_url: https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_11_05T07_54_10.919689
path:
- '**/details_harness|drop|3_2023-11-05T07-54-10.919689.parquet'
- split: 2023_11_06T15_43_11.163444
path:
- '**/details_harness|drop|3_2023-11-06T15-43-11.163444.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-11-06T15-43-11.163444.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_11_05T07_54_10.919689
path:
- '**/details_harness|gsm8k|5_2023-11-05T07-54-10.919689.parquet'
- split: 2023_11_06T15_43_11.163444
path:
- '**/details_harness|gsm8k|5_2023-11-06T15-43-11.163444.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-11-06T15-43-11.163444.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_11_05T07_54_10.919689
path:
- '**/details_harness|winogrande|5_2023-11-05T07-54-10.919689.parquet'
- split: 2023_11_06T15_43_11.163444
path:
- '**/details_harness|winogrande|5_2023-11-06T15-43-11.163444.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-11-06T15-43-11.163444.parquet'
- config_name: results
data_files:
- split: 2023_11_05T07_54_10.919689
path:
- results_2023-11-05T07-54-10.919689.parquet
- split: 2023_11_06T15_43_11.163444
path:
- results_2023-11-06T15-43-11.163444.parquet
- split: latest
path:
- results_2023-11-06T15-43-11.163444.parquet
---
# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o](https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r8-q_k_v_o_public",
"harness_winogrande_5",
split="train")
```
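Each run is also exposed as a timestamp-named split alongside the `latest` alias (see the `configs` section of this card's metadata). If you need to resolve the newest run yourself, the timestamp naming makes this a one-liner; a minimal sketch (the helper name is hypothetical, not part of any API):

```python
def newest_split(split_names):
    """Return the most recent timestamp-named split, ignoring the
    'latest' alias. Names like '2023_11_06T15_43_11.163444' sort
    lexicographically in chronological order."""
    timestamped = [name for name in split_names if name != "latest"]
    return max(timestamped)

splits = ["2023_11_05T07_54_10.919689", "2023_11_06T15_43_11.163444", "latest"]
print(newest_split(splits))  # 2023_11_06T15_43_11.163444
```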
## Latest results
These are the [latest results from run 2023-11-06T15:43:11.163444](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r8-q_k_v_o_public/blob/main/results_2023-11-06T15-43-11.163444.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.37458053691275167,
"em_stderr": 0.004956760684602152,
"f1": 0.41704173657718185,
"f1_stderr": 0.004847488019820457,
"acc": 0.45805311598499976,
"acc_stderr": 0.010642754511101384
},
"harness|drop|3": {
"em": 0.37458053691275167,
"em_stderr": 0.004956760684602152,
"f1": 0.41704173657718185,
"f1_stderr": 0.004847488019820457
},
"harness|gsm8k|5": {
"acc": 0.14025777103866566,
"acc_stderr": 0.009565108281428666
},
"harness|winogrande|5": {
"acc": 0.7758484609313339,
"acc_stderr": 0.011720400740774104
}
}
```
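The per-task metrics in the JSON above can also be inspected programmatically; a minimal sketch (the dictionary literal is copied from the results shown above, and the helper is hypothetical, not part of any leaderboard API):

```python
# Per-task metrics copied from the "Latest results" JSON above.
results = {
    "harness|drop|3": {"em": 0.37458053691275167, "f1": 0.41704173657718185},
    "harness|gsm8k|5": {"acc": 0.14025777103866566},
    "harness|winogrande|5": {"acc": 0.7758484609313339},
}

def best_task(results, metric):
    """Return the task with the highest value for `metric`,
    skipping tasks that do not report it."""
    scored = {task: vals[metric] for task, vals in results.items() if metric in vals}
    return max(scored, key=scored.get) if scored else None

print(best_task(results, "acc"))  # harness|winogrande|5
```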
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
distilled-one-sec-cv12-each-chunk-uniq/chunk_1 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 943702952.0
num_examples: 183886
download_size: 961352514
dataset_size: 943702952.0
---
# Dataset Card for "chunk_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vargr/youtube | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: channelId
dtype: string
- name: videoId
dtype: string
- name: title
dtype: string
- name: description
dtype: string
- name: views
dtype: int64
- name: url
dtype: string
- name: publishDate
dtype: timestamp[us]
- name: lengthSeconds
dtype: int64
- name: subscriberCount
dtype: int64
- name: videoCount
dtype: int64
- name: isVerified
dtype: bool
- name: keywords
dtype: string
- name: country
dtype: string
splits:
- name: train
num_bytes: 75475502
num_examples: 130854
download_size: 18930001
dataset_size: 75475502
---
# YouTube Dataset
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
interstellarninja/tool-calls-eval | ---
dataset_info:
features:
- name: system
dtype: string
- name: user
dtype: string
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: completion
dtype: string
- name: tools
dtype: string
splits:
- name: train
num_bytes: 174019
num_examples: 100
download_size: 54818
dataset_size: 174019
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
huggingartists/slava-kpss | ---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/slava-kpss"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 3.88329 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/e63e3a804916ed71bf2941ac4e190063.847x847x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/slava-kpss">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Слава КПСС (Slava KPSS)</div>
<a href="https://genius.com/artists/slava-kpss">
<div style="text-align: center; font-size: 14px;">@slava-kpss</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/slava-kpss).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/slava-kpss")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|897| -| -|
The 'train' split can easily be divided into 'train', 'validation' and 'test' splits with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/slava-kpss")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(
    datasets['train']['text'],
    [
        int(len(datasets['train']['text']) * train_percentage),
        int(len(datasets['train']['text']) * (train_percentage + validation_percentage)),
    ],
)
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
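The index arithmetic in the snippet above can be factored into a small helper that makes the two cut points explicit (the function is hypothetical, shown only for illustration):

```python
def split_indices(n, train_pct=0.9, validation_pct=0.07):
    """Return the two cut points that np.split uses to partition
    n examples into train/validation/test portions."""
    first = int(n * train_pct)
    second = int(n * (train_pct + validation_pct))
    return first, second

# For the 897 songs in this dataset:
print(split_indices(897))
```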
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
  author = {Aleksey Korshuk},
  year = {2021}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
aytvill/plastic-recycling-codes | ---
license: mit
task_categories:
- object-detection
size_categories:
- n<1K
---
Plastic recycling codes |
chats-bug/test-image-caption-Listed | ---
license: mit
---
|
nuprl-staging/humaneval-py-mutants | ---
dataset_info:
features:
- name: name
dtype: string
- name: language
dtype: string
- name: tests
dtype: string
- name: prompt
dtype: string
- name: stop_tokens
sequence: string
- name: correct
dtype: string
- name: mutants
sequence: string
- name: errors
sequence: string
splits:
- name: train
num_bytes: 657021
num_examples: 141
download_size: 0
dataset_size: 657021
---
# Dataset Card for "humaneval-py-mutants"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ronakpatidar2307/lol_dataset | ---
license: mit
---
|
vishalsmb/vishalsmb-llama2-ner-adsabs-WIESP2022-NER | ---
dataset_info:
features:
- name: bibcode
dtype: string
- name: label_studio_id
dtype: int64
- name: ner_ids
sequence: int64
- name: ner_tags
sequence: string
- name: section
dtype: string
- name: tokens
sequence: string
- name: unique_id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 10189142
num_examples: 1000
download_size: 2290254
dataset_size: 10189142
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
vanesa1221/admision-unsaac | ---
task_categories:
- question-answering
language:
- es
size_categories:
- n<1K
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
r0ll/ShadowFiend | ---
license: openrail
language:
- ru
--- |
pdjewell/medical_whisper_finetune_dataset | ---
dataset_info:
features:
- name: file_name
dtype: string
- name: sentence
dtype: string
- name: audio
struct:
- name: sample_rate
dtype: int64
- name: waveform
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1181457977
num_examples: 385
download_size: 279518497
dataset_size: 1181457977
---
# Dataset Card for "medical_whisper_finetune_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/du_yaoye_arknights | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Du Yaoye/ドゥ/杜遥夜 (Arknights)
This is the dataset of Du Yaoye/ドゥ/杜遥夜 (Arknights), containing 36 images and their tags.
The core tags of this character are `animal_ears, tiger_ears, tiger_girl, breasts, long_hair, hair_rings, brown_hair, brown_eyes, tail, blonde_hair, tiger_tail, animal_ear_fluff, tassel`, which are pruned in this dataset.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:--------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 36 | 57.82 MiB | [Download](https://huggingface.co/datasets/CyberHarem/du_yaoye_arknights/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 36 | 48.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/du_yaoye_arknights/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 88 | 97.13 MiB | [Download](https://huggingface.co/datasets/CyberHarem/du_yaoye_arknights/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/du_yaoye_arknights',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be discoverable here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, puffy_long_sleeves, solo, black_shorts, chinese_clothes, cowboy_shot, feather_boa, pelvic_curtain, white_dress, closed_mouth, looking_at_viewer, short_shorts, simple_background, thigh_strap, white_background, hair_between_eyes, hand_up, medium_breasts, open_mouth, thighs, white_thighhighs, yellow_eyes |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | puffy_long_sleeves | solo | black_shorts | chinese_clothes | cowboy_shot | feather_boa | pelvic_curtain | white_dress | closed_mouth | looking_at_viewer | short_shorts | simple_background | thigh_strap | white_background | hair_between_eyes | hand_up | medium_breasts | open_mouth | thighs | white_thighhighs | yellow_eyes |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------------|:-------|:---------------|:------------------|:--------------|:--------------|:-----------------|:--------------|:---------------|:--------------------|:---------------|:--------------------|:--------------|:-------------------|:--------------------|:----------|:-----------------|:-------------|:---------|:-------------------|:--------------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
abdalrahmanshahrour/autotrain-data-auto-arabic-summarization | ---
task_categories:
- summarization
---
# AutoTrain Dataset for project: auto-arabic-summarization
## Dataset Description
This dataset has been automatically processed by AutoTrain for project auto-arabic-summarization.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "\u0627\u0643\u062f \u0648\u0632\u064a\u0631 \u0627\u0644\u0635\u0646\u0627\u0639\u0647 \u0648\u0627\u0644\u0637\u0627\u0642\u0647 \u0648\u0627\u0644\u0645\u0646\u0627\u062c\u0645 \u0632\u0643\u0631\u064a\u0627 \u062d\u0645\u062f \u0627\u0646\u0647 \u062a\u0645 \u0627\u0644\u064a\u0648\u0645 \u0627\u0644\u062e\u0645\u064a\u0633 \u062e\u0644\u0627\u0644 \u062c\u0644\u0633\u0647 \u0627\u0644\u062a\u0627\u0645\u062a \u0628\u0627\u0644\u0639\u0627\u0635\u0645\u0647 \u0648\u0632\u064a\u0631 \u0627\u0644\u0637\u0627\u0642\u0647 \u0627\u0644\u062c\u0632\u0627\u0626\u064a \u0635\u0627\u0644\u062d \u062e\u0628\u0631\u064a \u0628\u062e\u0635\u0648\u0635 \u0627\u0634\u063a\u0627\u0644 \u0627\u0644\u0644\u062c\u0646\u0647 \u0627\u0644\u062a\u0648\u0646\u0633\u064a\u0647 \u0627\u0644\u062c\u0632\u0627\u0626\u0631\u064a\u0647 \u0645\u062c\u0627\u0644 \u0627\u0644\u0637\u0627\u0642\u0647 \u0644\u062a\u0642\u064a\u064a\u0645 \u0645\u062f\u0649 \u062a\u0637\u0628\u064a\u0642 \u0627\u0644\u0628\u0631\u0627\u0645\u062c \u0627\u0644\u0645\u062a\u0641\u0642 \u0639\u0644\u064a\u0647\u0627 \u062e\u0628\u0631\u0627\u0621 \u0627\u0644\u0628\u0644\u062f\u064a\u0646 \u0627\u0644\u0627\u062a\u0641\u0627\u0642 \u062a\u0632\u0648\u064a\u062f \u0627\u0644\u0645\u0646\u0627\u0637\u0642 \u0627\u0644\u062d\u062f\u0648\u062f\u064a\u0647 \u0627\u0644\u062a\u0648\u0646\u0633\u064a\u0647 \u0628\u0627\u0644\u0643\u0645\u064a\u0627\u062a \u0627\u0644\u0643\u0627\u0641\u064a\u0647 \u0642\u0648\u0627\u0631\u064a\u0631 \u0627\u0644\u063a\u0627\u0632 \u0627\u0644\u0645\u0646\u0632\u0644\u064a \u062a\u0642\u062f\u0631 \u0628\u062d\u0648\u0627\u0644\u064a \u0637\u0646 \u0627\u0644\u0642\u0648\u0627\u0631\u064a\u0631 \u0648\u0627\u0636\u0627\u0641 \u062d\u0645\u062f \u0627\u0646\u0647 \u0627\u0644\u0646\u0642\u0627\u0637 \u062a\u0645 \u0627\u0644\u0627\u062a\u0641\u0627\u0642 \u0628\u0634\u0627\u0646\u0647\u0627 \u062c\u0644\u0633\u0647 \u0627\u0644\u064a\u0648\u0645 \u062a\u0632\u0648\u064a\u062f 
\u0627\u0644\u0633\u0648\u0642 \u0627\u0644\u062a\u0648\u0646\u0633\u064a\u0647 \u0628\u0627\u0644\u063a\u0627\u0632 \u0627\u0644\u0637\u0628\u064a\u0639\u064a \u0639\u0628\u0631 \u0627\u0644\u0627\u0646\u0627\u0628\u064a\u0628 \u0648\u062a\u0632\u0648\u064a\u062f \u0627\u0644\u0645\u0646\u0627\u0637\u0642 \u0628\u0627\u0644\u0628\u062a\u0631\u0648\u0644 \u0627\u0644\u0645\u0633\u0627\u0644 \u0627\u0636\u0627\u0641\u0647 \u0627\u0644\u0649 \u062f\u0639\u0645 \u0627\u0644\u062a\u0639\u0627\u0648\u0646 \u0627\u0644\u0645\u062c\u0627\u0644 \u0627\u0644\u062a\u062c\u0627\u0631\u064a \u062a\u0645 \u0627\u0645\u0636\u0627\u0621 \u0645\u0630\u0643\u0631\u0647 \u062a\u0641\u0627\u0647\u0645 \u0639\u0642\u062f \u0644\u062a\u0643\u0648\u064a\u0646 \u062a\u0642\u0646\u0646\u064a\u0646 \u062a\u0648\u0646\u0633\u064a\u064a\u0646 \u0627\u0644\u062c\u0632\u0627\u0626\u0631",
"target": "\u0643\u0645\u0627 \u062a\u0645 \u0627\u0645\u0636\u0627\u0621 \u0645\u0630\u0643\u0631\u0629 \u062a\u0641\u0627\u0647\u0645 \u0639\u0642\u062f \u0644\u062a\u0643\u0648\u064a\u0646 \u062a\u0642\u0646\u0646\u064a\u0646 \u062a\u0648\u0646\u0633\u064a\u064a\u0646 \u0641\u064a \u0627\u0644\u062c\u0632\u0627\u0626\u0631 ."
},
{
"text": "\u0642\u0627\u0644 \u0627\u0644\u0648\u0632\u064a\u0631 \u0627\u0644\u0627\u0648\u0644 \u0627\u0644\u062c\u0632\u0627\u0626\u0631\u064a \u0639\u0628\u062f \u0627\u0644\u0645\u0627\u0644\u0643 \u0633\u0644\u0627\u0644 \u0627\u062b\u0631 \u0644\u0642\u0627\u0621 \u062c\u0645\u0639\u0647 \u0628\u0631\u0626\u064a\u0633 \u0645\u062c\u0644\u0633 \u0646\u0648\u0627\u0628 \u0627\u0644\u0634\u0639\u0628 \u0645\u062d\u0645\u062f \u0627\u0644\u0646\u0627\u0635\u0631 \u0627\u0644\u0639\u0644\u0627\u0642\u0627\u062a \u0627\u0644\u062b\u0646\u0627\u0626\u064a\u0647 \u0627\u0644\u0628\u0644\u062f\u064a\u0646 \u0645\u0645\u064a\u0632\u0647 \u0648\u0633\u062a\u0643\u0648\u0646 \u0627\u062d\u0633\u0646 \u062e\u0644\u0627\u0644 \u0627\u0644\u0641\u062a\u0631\u0647 \u0627\u0644\u0642\u0627\u062f\u0645\u0647 \u0648\u0627\u0636\u0627\u0641 \u062a\u0635\u0631\u064a\u062d \u0644\u0645\u0631\u0627\u0633\u0644 \u0627\u0644\u062c\u0648\u0647\u0631\u0647 \u0627\u0641 \u0627\u0645 \u0627\u0646\u0647 \u0639\u0627\u0647\u062f \u0631\u0626\u064a\u0633 \u0627\u0644\u0645\u062c\u0644\u0633 \u0628\u0627\u0644\u0645\u062d\u0627\u0641\u0638\u0647 \u0645\u062a\u0627\u0646\u0647 \u0627\u0644\u0639\u0644\u0627\u0642\u0647 \u0627\u0644\u0628\u0644\u062f\u064a\u0646 \u0648\u0645\u0648\u0627\u0635\u0644\u0647 \u0627\u0644\u062a\u0642\u062f\u0645 \u0648\u0627\u0644\u0639\u0645\u0644 \u0645\u0639\u0627 \u0648\u0627\u0648\u0636\u062d \u0639\u0628\u062f \u0627\u0644\u0645\u0627\u0644\u0643 \u0633\u0644\u0627\u0644 \u0645\u062d\u0645\u062f \u0627\u0644\u0646\u0627\u0635\u0631 \u0627\u0628\u062f\u0649 \u062f\u0639\u0645\u0647 \u0644\u0644\u0645\u0646\u0647\u062c \u062a\u0646\u062a\u0647\u062c\u0647 \u0627\u0644\u062c\u0632\u0627\u0626\u0631 \u0648\u0639\u0645\u0644\u0647\u0627 \u0648\u064a\u0627\u062a\u064a \u0627\u062c\u062a\u0645\u0627\u0639 \u0627\u0644\u0648\u0632\u064a\u0631 \u0627\u0644\u0627\u0648\u0644 \u0627\u0644\u062c\u0632\u0627\u0626\u0631\u064a \u0628\u0631\u0626\u064a\u0633 
\u0627\u0644\u0645\u062c\u0644\u0633 \u0647\u0627\u0645\u0634 \u0632\u064a\u0627\u0631\u0647 \u0639\u0645\u0644 \u0627\u062f\u0627\u0647\u0627 \u0627\u0644\u064a\u0648\u0645 \u0627\u0644\u062e\u0645\u064a\u0633 \u062a\u0648\u0646\u0633 \u062a\u0631\u0627\u0633 \u062e\u0644\u0627\u0644\u0647\u0627 \u0627\u0634\u063a\u0627\u0644 \u0627\u0644\u062f\u0648\u0631\u0647 \u0627\u0644 \u0644\u0644\u062c\u0646\u0647 \u0627\u0644\u0645\u062e\u062a\u0644\u0637\u0647 \u0627\u0644\u0639\u0644\u064a\u0627 \u0627\u0644\u062a\u0648\u0646\u0633\u064a\u0647 \u0627\u0644\u062c\u0632\u0627\u0626\u0631\u064a\u0647 \u0631\u0641\u0642\u0647 \u0631\u0626\u064a\u0633 \u0627\u0644\u062d\u0643\u0648\u0645\u0647 \u064a\u0648\u0633\u0641 \u0627\u0644\u0634\u0627\u0647\u062f \u0648\u0627\u0644\u062a\u064a \u0627\u0646\u062a\u0647\u062a \u0628\u0627\u0644\u0645\u0635\u0627\u062f\u0642\u0647 \u0639\u062f\u064a\u062f \u0627\u0644\u0627\u062a\u0641\u0627\u0642\u064a\u0627\u062a \u062a\u0648\u0646\u0633 \u0648\u0627\u0644\u062c\u0632\u0627\u0626\u0631",
"target": "\n\u0642\u0627\u0644 \u0627\u0644\u0648\u0632\u064a\u0631 \u0627\u0644\u0623\u0648\u0644 \u0627\u0644\u062c\u0632\u0627\u0626\u0631\u064a \u0639\u0628\u062f \u0627\u0644\u0645\u0627\u0644\u0643 \u0633\u0644\u0627\u0644 \u0627\u062b\u0631 \u0644\u0642\u0627\u0621 \u062c\u0645\u0639\u0647 \u0628\u0631\u0626\u064a\u0633 \u0645\u062c\u0644\u0633 \u0646\u0648\u0627\u0628 \u0627\u0644\u0634\u0639\u0628 \u0645\u062d\u0645\u062f \u0627\u0644\u0646\u0627\u0635\u0631\u060c \u0625\u0646 \u0627\u0644\u0639\u0644\u0627\u0642\u0627\u062a \u0627\u0644\u062b\u0646\u0627\u0626\u064a\u0629 \u0628\u064a\u0646 \u0627\u0644\u0628\u0644\u062f\u064a\u0646 \u0645\u0645\u064a\u0632\u0629 \u0648\u0633\u062a\u0643\u0648\u0646 \u0623\u062d\u0633\u0646 \u062e\u0644\u0627\u0644 \u0627\u0644\u0641\u062a\u0631\u0629 \u0627\u0644\u0642\u0627\u062f\u0645\u0629."
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 5102 |
| valid | 1276 |
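For quick sanity checks, the declared schema can be mirrored in plain Python. This is a minimal sketch, not part of the dataset tooling: the field names come from the card above, while the sample record is made up.

```python
# Expected fields from the "Dataset Fields" section above (both string-valued).
EXPECTED_FIELDS = {"text": str, "target": str}

def validate_record(record: dict) -> bool:
    """Return True if the record has exactly the declared fields, all strings."""
    if set(record) != set(EXPECTED_FIELDS):
        return False
    return all(isinstance(record[name], ftype) for name, ftype in EXPECTED_FIELDS.items())

# Hypothetical sample record for illustration.
sample = {"text": "source sentence", "target": "translated sentence"}
print(validate_record(sample))  # True
```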
|
pdearena/ShallowWater-2D | ---
license: mit
---
|
elgui/tibrazie | ---
license: apache-2.0
---
|
AFFFPupu/Maths_competition_questions | ---
license: unknown
---
|
huggingface-projects/bot-fight-data | ---
license: mit
---
|
alvations/c4p0-v2-en-ja | ---
dataset_info:
features:
- name: source
dtype: string
- name: target
dtype: string
- name: target_backto_source
dtype: string
- name: raw_target
list:
- name: generated_text
dtype: string
- name: raw_target_backto_source
list:
- name: generated_text
dtype: string
- name: prompt
dtype: string
- name: reverse_prompt
dtype: string
- name: source_langid
dtype: string
- name: target_langid
dtype: string
- name: target_backto_source_langid
dtype: string
- name: doc_id
dtype: int64
- name: sent_id
dtype: int64
- name: timestamp
dtype: string
- name: url
dtype: string
- name: doc_hash
dtype: string
- name: dataset
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: train
num_bytes: 22109670
num_examples: 17956
download_size: 8614674
dataset_size: 22109670
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
aish31/pop_genre5 | ---
license: openrail
---
|
autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-bcce97-62650145463 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cnn_dailymail
eval_info:
task: summarization
model: morenolq/bart-base-xsum
metrics: ['bertscore']
dataset_name: cnn_dailymail
dataset_config: 3.0.0
dataset_split: test
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: morenolq/bart-base-xsum
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Raffix](https://huggingface.co/Raffix) for evaluating this model. |
hjl/ultrafeedback_sft_losing | ---
configs:
- config_name: default
data_files:
- split: train_prefs
path: data/train_prefs-*
- split: train_sft
path: data/train_sft-*
- split: test_prefs
path: data/test_prefs-*
- split: test_sft
path: data/test_sft-*
- split: train_gen
path: data/train_gen-*
- split: test_gen
path: data/test_gen-*
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train_prefs
num_bytes: 158444052
num_examples: 61135
- name: train_sft
num_bytes: 158444052
num_examples: 61135
- name: test_prefs
num_bytes: 5060059
num_examples: 2000
- name: test_sft
num_bytes: 2588097
num_examples: 1000
- name: train_gen
num_bytes: 158444052
num_examples: 61135
- name: test_gen
num_bytes: 2588097
num_examples: 1000
download_size: 278650781
dataset_size: 485568409
---
# Dataset Card for "ultrafeedback_sft_losing"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
papasega/speechocean762_fluency | ---
dataset_info:
features:
- name: fluency
dtype: int64
- name: text
dtype: string
- name: speaker
dtype: string
- name: audio
dtype: audio
- name: label_fluency
dtype: string
- name: audio_duration
dtype: float64
- name: speech_rate
dtype: float64
- name: 1gram_repeat
dtype: int64
- name: 2gram_repeat
dtype: int64
- name: 3gram_repeat
dtype: int64
- name: 4gram_repeat
dtype: int64
- name: 5gram_repeat
dtype: int64
splits:
- name: train
num_bytes: 331754658.5
num_examples: 2500
- name: test
num_bytes: 310460448.5
num_examples: 2500
download_size: 611365572
dataset_size: 642215107.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Xhaheen/Alpaca_urdu__2024_ | ---
dataset_info:
features:
- name: urdu_instruction
dtype: string
- name: urdu_input
dtype: string
- name: urdu_output
dtype: string
- name: prompt
dtype: string
- name: input_ids
sequence: int64
- name: attention_mask
sequence: int64
splits:
- name: train
num_bytes: 61452899
num_examples: 5782
download_size: 12715387
dataset_size: 61452899
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CyberHarem/mysterious_idol_x_alter_fgo | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of mysterious_idol_x_alter/謎のアイドルX〔オルタ〕/谜之偶像X〔Alter〕 (Fate/Grand Order)
This is the dataset of mysterious_idol_x_alter/謎のアイドルX〔オルタ〕/谜之偶像X〔Alter〕 (Fate/Grand Order), containing 500 images and their tags.
The core tags of this character are `yellow_eyes, blonde_hair, ahoge, glasses, braid, hair_between_eyes, semi-rimless_eyewear, black-framed_eyewear, under-rim_eyewear, sidelocks, french_braid, ribbon`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 685.14 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mysterious_idol_x_alter_fgo/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 500 | 617.27 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mysterious_idol_x_alter_fgo/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1240 | 1.16 GiB | [Download](https://huggingface.co/datasets/CyberHarem/mysterious_idol_x_alter_fgo/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile

from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource

# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/mysterious_idol_x_alter_fgo',
    repo_type='dataset',
    filename='dataset-raw.zip',
)

# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 20 |  |  |  |  |  | 1girl, looking_at_viewer, plaid_scarf, red_scarf, solo, blue_skirt, jacket, pleated_skirt, serafuku, garter_straps, long_sleeves, black_thighhighs, duffel_coat, blue_shirt, open_coat, red_neckerchief, white_background, hood, simple_background, blush |
| 1 | 12 |  |  |  |  |  | 1girl, black_thighhighs, blue_skirt, excalibur_(fate/stay_night), holding_sword, jacket, looking_at_viewer, open_clothes, plaid_scarf, pleated_skirt, red_scarf, serafuku, solo, duffel_coat, garter_straps, covered_mouth, long_sleeves, blue_shirt, hair_ribbon, white_background, boots, fringe_trim, red_neckerchief, simple_background |
| 2 | 8 |  |  |  |  |  | 1girl, coat, hood, jacket, plaid_scarf, red_scarf, solo, long_sleeves, looking_at_viewer, upper_body, valentine, hair_bun, holding_gift, gift_box, simple_background, black_ribbon, blue_skirt, blush, candy, chocolate, hair_ribbon, open_clothes, school_uniform |
| 3 | 14 |  |  |  |  |  | 1girl, looking_at_viewer, plaid_scarf, red_scarf, solo, upper_body, coat, jacket, long_sleeves, closed_mouth, simple_background, blush, white_background, hair_ribbon, smile |
| 4 | 6 |  |  |  |  |  | 1girl, armor, holding_sword, looking_at_viewer, solo, black_thighhighs, coat, hood_up, jacket, open_clothes, black_leotard, garter_straps, black_gloves, energy_sword |
| 5 | 9 |  |  |  |  |  | 1girl, gloves, holding_sword, looking_at_viewer, solo, breastplate, hood_up, black_thighhighs, jacket, dual_wielding, lightsaber |
| 6 | 34 |  |  |  |  |  | 1girl, looking_at_viewer, solo, black_shorts, bike_shorts, white_shirt, gym_uniform, blush, black_thighhighs, long_sleeves, name_tag, choker, medium_breasts, black_jacket, simple_background, thighs, hair_ribbon, hood, open_jacket |
| 7 | 6 |  |  |  |  |  | 1boy, 1girl, bike_shorts, blush, hetero, jacket, solo_focus, black_shorts, indoors, medium_breasts, nipples, penis, vaginal, clothed_sex, girl_on_top, looking_at_viewer, open_clothes, open_mouth, straddling, thighhighs, ass, cum_in_pussy, hood, looking_back, sex_from_behind |
| 8 | 5 |  |  |  |  |  | 1girl, blue_sky, cloud, day, looking_at_viewer, ocean, outdoors, bare_shoulders, black_one-piece_swimsuit, medium_breasts, solo, water, jacket, wading, beachball, black_ribbon, blush, cleavage, closed_mouth, collarbone, covered_navel, dutch_angle, food, hair_ribbon, off_shoulder, parted_lips, school_swimsuit, short_hair, standing, tree |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | plaid_scarf | red_scarf | solo | blue_skirt | jacket | pleated_skirt | serafuku | garter_straps | long_sleeves | black_thighhighs | duffel_coat | blue_shirt | open_coat | red_neckerchief | white_background | hood | simple_background | blush | excalibur_(fate/stay_night) | holding_sword | open_clothes | covered_mouth | hair_ribbon | boots | fringe_trim | coat | upper_body | valentine | hair_bun | holding_gift | gift_box | black_ribbon | candy | chocolate | school_uniform | closed_mouth | smile | armor | hood_up | black_leotard | black_gloves | energy_sword | gloves | breastplate | dual_wielding | lightsaber | black_shorts | bike_shorts | white_shirt | gym_uniform | name_tag | choker | medium_breasts | black_jacket | thighs | open_jacket | 1boy | hetero | solo_focus | indoors | nipples | penis | vaginal | clothed_sex | girl_on_top | open_mouth | straddling | thighhighs | ass | cum_in_pussy | looking_back | sex_from_behind | blue_sky | cloud | day | ocean | outdoors | bare_shoulders | black_one-piece_swimsuit | water | wading | beachball | cleavage | collarbone | covered_navel | dutch_angle | food | off_shoulder | parted_lips | school_swimsuit | short_hair | standing | tree |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:--------------|:------------|:-------|:-------------|:---------|:----------------|:-----------|:----------------|:---------------|:-------------------|:--------------|:-------------|:------------|:------------------|:-------------------|:-------|:--------------------|:--------|:------------------------------|:----------------|:---------------|:----------------|:--------------|:--------|:--------------|:-------|:-------------|:------------|:-----------|:---------------|:-----------|:---------------|:--------|:------------|:-----------------|:---------------|:--------|:--------|:----------|:----------------|:---------------|:---------------|:---------|:--------------|:----------------|:-------------|:---------------|:--------------|:--------------|:--------------|:-----------|:---------|:-----------------|:---------------|:---------|:--------------|:-------|:---------|:-------------|:----------|:----------|:--------|:----------|:--------------|:--------------|:-------------|:-------------|:-------------|:------|:---------------|:---------------|:------------------|:-----------|:--------|:------|:--------|:-----------|:-----------------|:---------------------------|:--------|:---------|:------------|:-----------|:-------------|:----------------|:--------------|:-------|:---------------|:--------------|:------------------|:-------------|:-----------|:-------|
| 0 | 20 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 12 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | X | X | | X | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | | | | X | | | | | | | X | X | X | | | X | | X | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 14 |  |  |  |  |  | X | X | X | X | X | | X | | | | X | | | | | | X | | X | X | | | | | X | | | X | X | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 6 |  |  |  |  |  | X | X | | | X | | X | | | X | | X | | | | | | | | | | X | X | | | | | X | | | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 9 |  |  |  |  |  | X | X | | | X | | X | | | | | X | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | X | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 34 |  |  |  |  |  | X | X | | | X | | | | | | X | X | | | | | | X | X | X | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 6 |  |  |  |  |  | X | X | | | | | X | | | | | | | | | | | X | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | | | | | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 8 | 5 |  |  |  |  |  | X | X | | | X | | X | | | | | | | | | | | | | X | | | | | X | | | | | | | | | X | | | | X | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
bunkalab/medium-sample-technology-tags | ---
dataset_info:
features:
- name: title
dtype: string
- name: tags
dtype: string
- name: doc_id
dtype: int64
splits:
- name: train
num_bytes: 113529
num_examples: 1394
download_size: 68736
dataset_size: 113529
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Dantenho/Teste2 | ---
license: apache-2.0
---
|
koochikoo25/Pashto-Concatenated | ---
license: cc-by-nd-4.0
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 2249963091.9404793
num_examples: 3548
- name: validation
num_bytes: 317718223.72
num_examples: 501
- name: test
num_bytes: 79778102.95952095
num_examples: 126
download_size: 2609724072
dataset_size: 2647459418.6200004
---
|
Jasteg19/Ocra_Sample_Dataset | ---
dataset_info:
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 4908662.853988133
num_examples: 2878
download_size: 3603125
dataset_size: 4908662.853988133
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Mitsuki-Sakamoto/alpaca_farm-deberta-re-pref-64-fil_self_160m_bo16_2_mix_50_kl_0.1_prm_70m_thr_0.0_seed_1_tp_0.9 | ---
dataset_info:
config_name: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: preference
dtype: int64
- name: output_1
dtype: string
- name: output_2
dtype: string
- name: reward_model_prompt_format
dtype: string
- name: gen_prompt_format
dtype: string
- name: gen_kwargs
struct:
- name: do_sample
dtype: bool
- name: max_new_tokens
dtype: int64
- name: pad_token_id
dtype: int64
- name: top_k
dtype: int64
- name: top_p
dtype: float64
- name: reward_1
dtype: float64
- name: reward_2
dtype: float64
- name: n_samples
dtype: int64
- name: reject_select
dtype: string
- name: index
dtype: int64
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: filtered_epoch
dtype: int64
- name: gen_reward
dtype: float64
- name: gen_response
dtype: string
splits:
- name: epoch_0
num_bytes: 43685579
num_examples: 18928
- name: epoch_1
num_bytes: 44253092
num_examples: 18928
- name: epoch_2
num_bytes: 44323759
num_examples: 18928
- name: epoch_3
num_bytes: 44360727
num_examples: 18928
- name: epoch_4
num_bytes: 44375203
num_examples: 18928
- name: epoch_5
num_bytes: 44378240
num_examples: 18928
- name: epoch_6
num_bytes: 44367927
num_examples: 18928
- name: epoch_7
num_bytes: 44361513
num_examples: 18928
- name: epoch_8
num_bytes: 44358337
num_examples: 18928
- name: epoch_9
num_bytes: 44355181
num_examples: 18928
- name: epoch_10
num_bytes: 44353762
num_examples: 18928
- name: epoch_11
num_bytes: 44352125
num_examples: 18928
- name: epoch_12
num_bytes: 44351546
num_examples: 18928
- name: epoch_13
num_bytes: 44351968
num_examples: 18928
- name: epoch_14
num_bytes: 44351214
num_examples: 18928
- name: epoch_15
num_bytes: 44353782
num_examples: 18928
- name: epoch_16
num_bytes: 44352144
num_examples: 18928
- name: epoch_17
num_bytes: 44353052
num_examples: 18928
- name: epoch_18
num_bytes: 44353100
num_examples: 18928
- name: epoch_19
num_bytes: 44352312
num_examples: 18928
- name: epoch_20
num_bytes: 44353256
num_examples: 18928
- name: epoch_21
num_bytes: 44353847
num_examples: 18928
- name: epoch_22
num_bytes: 44352860
num_examples: 18928
- name: epoch_23
num_bytes: 44351212
num_examples: 18928
- name: epoch_24
num_bytes: 44352677
num_examples: 18928
- name: epoch_25
num_bytes: 44352848
num_examples: 18928
- name: epoch_26
num_bytes: 44352811
num_examples: 18928
- name: epoch_27
num_bytes: 44352019
num_examples: 18928
- name: epoch_28
num_bytes: 44352502
num_examples: 18928
- name: epoch_29
num_bytes: 44353403
num_examples: 18928
download_size: 695990389
dataset_size: 1329871998
configs:
- config_name: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1
data_files:
- split: epoch_0
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_0-*
- split: epoch_1
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_1-*
- split: epoch_2
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_2-*
- split: epoch_3
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_3-*
- split: epoch_4
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_4-*
- split: epoch_5
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_5-*
- split: epoch_6
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_6-*
- split: epoch_7
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_7-*
- split: epoch_8
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_8-*
- split: epoch_9
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_9-*
- split: epoch_10
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_10-*
- split: epoch_11
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_11-*
- split: epoch_12
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_12-*
- split: epoch_13
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_13-*
- split: epoch_14
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_14-*
- split: epoch_15
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_15-*
- split: epoch_16
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_16-*
- split: epoch_17
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_17-*
- split: epoch_18
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_18-*
- split: epoch_19
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_19-*
- split: epoch_20
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_20-*
- split: epoch_21
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_21-*
- split: epoch_22
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_22-*
- split: epoch_23
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_23-*
- split: epoch_24
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_24-*
- split: epoch_25
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_25-*
- split: epoch_26
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_26-*
- split: epoch_27
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_27-*
- split: epoch_28
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_28-*
- split: epoch_29
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/epoch_29-*
---
|
manishiitg/aditi-syn-v1 | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 113147537
num_examples: 25000
download_size: 36779856
dataset_size: 113147537
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: apache-2.0
language:
- hi
- en
---
v1 of the synthetic dataset generated for the Aditi model.
The generation scripts are located at https://github.com/manishiitg/aditi_dataset/tree/main/gen
|
Kaue123456/JonathanNeves | ---
license: openrail
---
|
maghwa/OpenHermes-2-AR-10K-12 | ---
dataset_info:
features:
- name: model
dtype: 'null'
- name: model_name
dtype: 'null'
- name: skip_prompt_formatting
dtype: 'null'
- name: custom_instruction
dtype: 'null'
- name: title
dtype: 'null'
- name: hash
dtype: 'null'
- name: system_prompt
dtype: 'null'
- name: category
dtype: 'null'
- name: topic
dtype: 'null'
- name: avatarUrl
dtype: 'null'
- name: idx
dtype: 'null'
- name: conversations
dtype: string
- name: language
dtype: 'null'
- name: id
dtype: 'null'
- name: views
dtype: float64
- name: source
dtype: string
splits:
- name: train
num_bytes: 30330813
num_examples: 10001
download_size: 14010383
dataset_size: 30330813
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jorgejgnz/simple-fluid-simulations | ---
license: cc-by-4.0
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 8201587
num_examples: 779
download_size: 7794763
dataset_size: 8201587
tags:
- physics
- simulation
- fluids
- video
- gif
size_categories:
- n<1K
--- |
gokuls/glue_augmented_sst2 | ---
license: apache-2.0
---
# Dataset Card for glue_augmented_sst2
## Dataset Description
Augmented SST-2 dataset
**Reference:** https://huggingface.co/datasets/glue |
blanchon/EuroSAT_RGB | ---
language: en
license: unknown
size_categories:
- 10K<n<100K
task_categories:
- image-classification
paperswithcode_id: eurosat
pretty_name: EuroSAT RGB
tags:
- remote-sensing
- earth-observation
- geospatial
- satellite-imagery
- land-cover-classification
- sentinel-2
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Annual Crop
'1': Forest
'2': Herbaceous Vegetation
'3': Highway
'4': Industrial Buildings
'5': Pasture
'6': Permanent Crop
'7': Residential Buildings
'8': River
'9': SeaLake
- name: filename
dtype: string
splits:
- name: train
num_bytes: 104485303.0
num_examples: 16200
- name: test
num_bytes: 34726245.0
num_examples: 5400
- name: validation
num_bytes: 34781690.0
num_examples: 5400
download_size: 174279561
dataset_size: 173993238.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
# EuroSAT RGB
<!-- Dataset thumbnail -->

<!-- Provide a quick summary of the dataset. -->
EuroSAT RGB is the RGB version of the EuroSAT dataset, which is based on Sentinel-2 satellite images covering 13 spectral bands and consists of 10 classes with 27,000 labeled and geo-referenced samples.
- **Paper:** https://arxiv.org/abs/1709.00029
- **Homepage:** https://github.com/phelber/EuroSAT
## Description
<!-- Provide a longer summary of what this dataset is. -->
The EuroSAT dataset is a comprehensive land cover classification dataset that focuses on images taken by the [ESA Sentinel-2 satellite](https://sentinel.esa.int/web/sentinel/missions/sentinel-2). It contains a total of 27,000 images, each with a resolution of 64x64 pixels. These images cover 10 distinct land cover classes and are collected from over 34 European countries.
The dataset is available in two versions: **RGB only** (this repo) and all 13 [Multispectral (MS) Sentinel-2 bands](https://sentinels.copernicus.eu/web/sentinel/user-guides/sentinel-2-msi/resolutions/spatial). EuroSAT is considered a relatively easy dataset, with approximately 98.6% accuracy achievable using a ResNet-50 architecture.
- **Total Number of Images**: 27000
- **Bands**: 3 (RGB)
- **Image Resolution**: 64x64 pixels
- **Land Cover Classes**: 10
- Classes: Annual Crop, Forest, Herbaceous Vegetation, Highway, Industrial Buildings, Pasture, Permanent Crop, Residential Buildings, River, SeaLake
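The label ids declared in the metadata above map to class names as follows. This is a convenience sketch for working with the integer labels, not an official API of the dataset:

```python
# Class-label mapping taken from the dataset metadata above.
EUROSAT_CLASSES = {
    0: "Annual Crop",
    1: "Forest",
    2: "Herbaceous Vegetation",
    3: "Highway",
    4: "Industrial Buildings",
    5: "Pasture",
    6: "Permanent Crop",
    7: "Residential Buildings",
    8: "River",
    9: "SeaLake",
}

def label_name(label_id: int) -> str:
    """Translate an integer label from the dataset into its class name."""
    return EUROSAT_CLASSES[label_id]

print(label_name(8))  # River
```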
## Usage
To use this dataset, simply use `datasets.load_dataset("blanchon/EuroSAT_RGB")`.
<!-- Provide any additional information on how to use this dataset. -->
```python
from datasets import load_dataset
EuroSAT_RGB = load_dataset("blanchon/EuroSAT_RGB")
```
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
If you use the EuroSAT dataset in your research, please consider citing the following publication:
```bibtex
@article{helber2017eurosat,
  title={EuroSAT: A Novel Dataset and Deep Learning Benchmark for Land Use and Land Cover Classification},
  author={Helber, Patrick and Bischke, Benjamin and Dengel, Andreas and Borth, Damian},
  journal={arXiv preprint arXiv:1709.00029},
  year={2017}
}
```
|
napatswift/thaigov-radio-audio | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
splits:
- name: train
num_bytes: 828772851.0
num_examples: 426
download_size: 824527615
dataset_size: 828772851.0
---
# Dataset Card for "thaigov-radio-audio"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sileod/attempto-nli | ---
license: apache-2.0
task_ids:
- natural-language-inference
task_categories:
- text-classification
language:
- en
---
Natural language inference using Attempto Controlled English.
Paper to come.
```
@inproceedings{fuchs2012first,
title={First-order reasoning for attempto controlled english},
author={Fuchs, Norbert E},
booktitle={Controlled Natural Language: Second International Workshop, CNL 2010, Marettimo Island, Italy, September 13-15, 2010. Revised Papers 2},
pages={73--94},
year={2012},
organization={Springer}
}
``` |
yzhuang/autotree_automl_jannis_sgosdt_l256_d3_sd0 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 656240000
num_examples: 10000
- name: validation
num_bytes: 656240000
num_examples: 10000
download_size: 1192655830
dataset_size: 1312480000
---
# Dataset Card for "autotree_automl_jannis_sgosdt_l256_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Nerdiin/oliveiracker | ---
license: openrail
---
|
distilled-from-one-sec-cv12/chunk_40 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1292186280
num_examples: 251790
download_size: 1312964172
dataset_size: 1292186280
---
# Dataset Card for "chunk_40"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
senhorsapo/gizmoduck | ---
license: openrail
---
|
Aditya2034/Wikipedia | ---
license: apache-2.0
---
|
MetroCat/milunim_zaahalim | ---
license: afl-3.0
---
|
alisson40889/moreira | ---
license: openrail
---
|
heliosprime/twitter_dataset_1713202977 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 28896
num_examples: 78
download_size: 23917
dataset_size: 28896
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1713202977"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yzhuang/autotree_pmlb_10000_banana_sgosdt_l256_dim10_d3_sd0 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 154520000
num_examples: 10000
- name: validation
num_bytes: 154520000
num_examples: 10000
download_size: 50636856
dataset_size: 309040000
---
# Dataset Card for "autotree_pmlb_10000_banana_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mick0615/data | ---
license: openrail
---
|
NbAiLab/mnli-norwegian | ---
annotations_creators:
- expert-generated
language:
- 'no'
- 'nob'
- 'en'
language_creators:
- machine-generated
- expert-generated
license:
- apache-2.0
multilinguality:
- multilingual
pretty_name: MNLI Norwegian
size_categories:
- 100K<n<1M
source_datasets: []
tags:
- norwegian
- simcse
- mnli
- nli
- sentence
task_categories:
- sentence-similarity
- text-classification
task_ids:
- natural-language-inference
- semantic-similarity-classification
---
# MNLI Norwegian
The Multi-Genre Natural Language Inference (MultiNLI) corpus is a crowd-sourced collection of 433k sentence pairs annotated with textual entailment information. The corpus is modeled on the SNLI corpus, but differs in that it covers a range of genres of spoken and written text, and supports a distinctive cross-genre generalisation evaluation. There is also a [HuggingFace version](https://huggingface.co/datasets/multi_nli) of the dataset available.
This dataset was machine translated using Google Translate. From this translation, different versions of the dataset were created. Included in the repo is a version specifically suited for training sentence-BERT models; this version includes the triplet base-entailment-contradiction. There is also a version that mixes English and Norwegian, as well as both csv and json versions. The scripts for generating the datasets are included in this repo.
Please note that there is no test dataset for MNLI, since it is closed. The authors of MNLI inform us that they selected 7,500 new contexts for the XNLI test set in the same way as the original MNLI contexts, which means the English part of the XNLI test set is highly comparable: for each genre, the text is generally in-domain with the original MNLI test set (it comes from the same source and was selected in the same way). In most cases the XNLI test set can therefore be used instead.
### The following datasets are available in the repo:
* mnli_no_en_for_simcse.csv
* mnli_no_en_small_for_simcse.csv
* mnli_no_for_simcse.csv
* multinli_1.0_dev_matched_no_mt.jsonl
* multinli_1.0_dev_mismatched_no_mt.jsonl
* multinli_1.0_train_no_mt.jsonl
* nli_for_simcse.csv
* xnli_dev_no_mt.jsonl
* xnli_test_no_mt.jsonl
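As a rough illustration of how the SimCSE-style triplet files can be consumed (the column names `sent0`, `sent1`, `hard_neg` follow the original SimCSE convention and are an assumption here; check the CSV header of the file you use, and note the sample rows below are made up):

```python
import csv
import io

# Hypothetical two-row sample in the SimCSE triplet layout:
# base sentence, entailment, contradiction.
sample = io.StringIO(
    "sent0,sent1,hard_neg\n"
    "Katten sover.,Dyret hviler.,Katten løper.\n"
    "Han leser en bok.,Noen leser.,Ingen leser.\n"
)

# Each row yields one (base, entailment, contradiction) training triplet.
triplets = [
    (row["sent0"], row["sent1"], row["hard_neg"])
    for row in csv.DictReader(sample)
]

for base, entailment, contradiction in triplets:
    print(base, "->", entailment, "/", contradiction)
```

For a real file, replace the `io.StringIO` sample with `open("nli_for_simcse.csv", encoding="utf-8")`.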
### Licensing Information
The majority of the corpus is released under the OANC’s license, which allows all content to be freely used, modified, and shared under permissive terms. The data in the FICTION section falls under several permissive licenses; Seven Swords is available under a Creative Commons Share-Alike 3.0 Unported License, and with the explicit permission of the author, Living History and Password Incorrect are available under Creative Commons Attribution 3.0 Unported Licenses; the remaining works of fiction are in the public domain in the United States (but may be licensed differently elsewhere). The translation and compilation of the Norwegian part is released under the Creative Commons Attribution 3.0 Unported Licenses.
### Citation Information
The datasets are compiled and machine translated by the AiLab at the Norwegian National Library. However, the vast majority of the work behind this dataset lies in compiling the original English version. We therefore suggest that you also cite the original work:
```
@InProceedings{N18-1101,
author = "Williams, Adina
and Nangia, Nikita
and Bowman, Samuel",
title = "A Broad-Coverage Challenge Corpus for
Sentence Understanding through Inference",
booktitle = "Proceedings of the 2018 Conference of
the North American Chapter of the
Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long
Papers)",
year = "2018",
publisher = "Association for Computational Linguistics",
pages = "1112--1122",
location = "New Orleans, Louisiana",
url = "http://aclweb.org/anthology/N18-1101"
}
```
|
SEACrowd/covost2 | ---
tags:
- speech-to-text-translation
- machine-translation
language:
- ind
- eng
---
# covost2
CoVoST2 is a large-scale multilingual speech translation corpus covering translations from 21 languages into English
and from English into 15 languages. The dataset was created using Mozilla's open-source Common Voice database of
crowdsourced voice recordings. There are 2,900 hours of speech represented in the corpus.
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@article{wang2020covost,
title={Covost 2 and massively multilingual speech-to-text translation},
author={Wang, Changhan and Wu, Anne and Pino, Juan},
journal={arXiv preprint arXiv:2007.10310},
year={2020}
}
@inproceedings{wang21s_interspeech,
author={Wang, Changhan and Wu, Anne and Pino, Juan},
title={{CoVoST 2 and Massively Multilingual Speech Translation}},
year=2021,
booktitle={Proc. Interspeech 2021},
pages={2247--2251},
  url={https://www.isca-speech.org/archive/interspeech_2021/wang21s_interspeech},
doi={10.21437/Interspeech.2021-2027}
}
```
## License
CC BY-NC 4.0
## Homepage
[https://huggingface.co/datasets/covost2](https://huggingface.co/datasets/covost2)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) |
detectors/isun-ood | ---
license: unknown
size_categories: 1K<n<10K
task_categories:
- image-classification
paperswithcode_id: isun
pretty_name: iSUN
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 24514257.375
num_examples: 8925
download_size: 0
dataset_size: 24514257.375
---
# Dataset Card for iSUN for OOD Detection
<!-- Provide a quick summary of the dataset. -->
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Original Dataset Authors**: Junting Pan, Xavier Giró-i-Nieto
- **OOD Split Authors:** Shiyu Liang, Yixuan Li, R. Srikant
- **Shared by:** Eduardo Dadalto
- **License:** unknown
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Original Dataset Paper:** http://arxiv.org/abs/1507.01422v1
- **First OOD Application Paper:** http://arxiv.org/abs/1706.02690v5
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
This dataset is intended to be used as an out-of-distribution dataset for image classification benchmarks.
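For context, here is a minimal sketch of how such an OOD split is typically scored against an in-distribution classifier, using the simple maximum-softmax-probability baseline (this is an illustrative baseline, not the method of the papers cited below, and the logits are made up):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def msp_score(logits):
    """Maximum softmax probability: higher means 'more in-distribution'."""
    return max(softmax(logits))

# Made-up logits: a confident in-distribution prediction vs. a flat,
# uncertain one, as is typical for OOD inputs such as iSUN images.
in_dist_logits = [8.0, 1.0, 0.5]
ood_logits = [1.1, 1.0, 0.9]

# Thresholding this score separates in-distribution from OOD samples.
print(msp_score(in_dist_logits) > msp_score(ood_logits))  # → True
```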
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
This dataset is not annotated.
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
The goal of curating and sharing this dataset on the HuggingFace Hub is to accelerate research and promote reproducibility in generalized Out-of-Distribution (OOD) detection.
Check the python library [detectors](https://github.com/edadaltocg/detectors) if you are interested in OOD detection.
### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
Please check original paper for details on the dataset.
### Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Please check original paper for details on the dataset.
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@software{detectors2023,
author = {Eduardo Dadalto},
title = {Detectors: a Python Library for Generalized Out-Of-Distribution Detection},
url = {https://github.com/edadaltocg/detectors},
doi = {https://doi.org/10.5281/zenodo.7883596},
month = {5},
year = {2023}
}
@article{1706.02690v5,
author = {Shiyu Liang and Yixuan Li and R. Srikant},
title = {Enhancing The Reliability of Out-of-distribution Image Detection in
Neural Networks},
year = {2017},
month = {6},
note = {ICLR 2018},
archiveprefix = {arXiv},
url = {http://arxiv.org/abs/1706.02690v5}
}
@article{1507.01422v1,
author = {Junting Pan and Xavier Giró-i-Nieto},
title = {End-to-end Convolutional Network for Saliency Prediction},
year = {2015},
month = {7},
note = {Winner of the saliency prediction challenge in the Large-scale Scene
Understanding (LSUN) Challenge in the associated workshop of the IEEE
Conference on Computer Vision and Pattern Recognition (CVPR) 2015},
archiveprefix = {arXiv},
url = {http://arxiv.org/abs/1507.01422v1}
}
```
## Dataset Card Authors
Eduardo Dadalto
## Dataset Card Contact
https://huggingface.co/edadaltocg |