id | lastModified | tags | author | description | citation | cardData | likes | downloads | card |
|---|---|---|---|---|---|---|---|---|---|
autoevaluate/autoeval-eval-acronym_identification-default-b19dd3-67560145602 | 2023-10-04T17:23:42.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-wmt14-de-en-fbedb0-67643145603 | 2023-10-04T17:24:09.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-wmt14-de-en-fbedb0-67643145604 | 2023-10-04T17:35:06.000Z | [
"autotrain",
"evaluation",
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- wmt14
eval_info:
task: translation
model: leukas/byt5-large-wmt14-deen
metrics: ['bleu']
dataset_name: wmt14
dataset_config: de-en
dataset_split: test
col_mapping:
source: translation.de
target: translation.en
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Translation
* Model: leukas/byt5-large-wmt14-deen
* Dataset: wmt14
* Config: de-en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
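These stored predictions can be loaded like any other dataset on the Hub. A minimal sketch (the repository id is taken from this card; the default configuration and the exact layout of the prediction files are assumptions):
```python
from datasets import load_dataset

# Hypothetical sketch: fetch the prediction files from this
# evaluation repository with the default configuration.
preds = load_dataset("autoevaluate/autoeval-eval-wmt14-de-en-fbedb0-67643145604")
print(preds)  # inspect the available splits and columns
```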
## Contributions
Thanks to [@seeed](https://huggingface.co/seeed) for evaluating this model. |
autoevaluate/autoeval-eval-wmt14-de-en-fbedb0-67643145605 | 2023-10-04T17:31:34.000Z | [
"autotrain",
"evaluation",
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- wmt14
eval_info:
task: translation
model: leukas/mt5-large-wmt14-deen
metrics: ['bleu']
dataset_name: wmt14
dataset_config: de-en
dataset_split: test
col_mapping:
source: translation.de
target: translation.en
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Translation
* Model: leukas/mt5-large-wmt14-deen
* Dataset: wmt14
* Config: de-en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@seeed](https://huggingface.co/seeed) for evaluating this model. |
autoevaluate/autoeval-eval-adversarial_qa-adversarialQA-cd62e4-67882145606 | 2023-10-04T17:26:08.000Z | [
"autotrain",
"evaluation",
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- adversarial_qa
eval_info:
task: extractive_question_answering
model: Akihiro2/bert-finetuned-squad
metrics: []
dataset_name: adversarial_qa
dataset_config: adversarialQA
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: Akihiro2/bert-finetuned-squad
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@zhouzj](https://huggingface.co/zhouzj) for evaluating this model. |
autoevaluate/autoeval-eval-adversarial_qa-adversarialQA-cd62e4-67882145607 | 2023-10-04T17:26:15.000Z | [
"autotrain",
"evaluation",
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- adversarial_qa
eval_info:
task: extractive_question_answering
model: Asmit/bert-finetuned-squad
metrics: []
dataset_name: adversarial_qa
dataset_config: adversarialQA
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: Asmit/bert-finetuned-squad
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@zhouzj](https://huggingface.co/zhouzj) for evaluating this model. |
autoevaluate/autoeval-eval-acronym_identification-default-58eb27-68148145611 | 2023-10-04T17:25:59.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-glue-cola-508e4a-68175145612 | 2023-10-04T17:26:37.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-acronym_identification-default-460442-68463145616 | 2023-10-04T17:26:40.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-acronym_identification-default-4e5d8b-68492145617 | 2023-10-04T17:26:45.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-acronym_identification-default-20be5f-68181145613 | 2023-10-04T17:26:45.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-acronym_identification-default-9e8f9b-68269145614 | 2023-10-04T17:26:48.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-acronym_identification-default-2b4455-68323145615 | 2023-10-04T17:27:19.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-acronym_identification-default-31f5c5-68516145618 | 2023-10-04T17:27:22.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-go_emotions-raw-c4cfa5-68606145620 | 2023-10-04T17:27:26.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-go_emotions-raw-c4cfa5-68606145621 | 2023-10-04T17:27:27.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-acronym_identification-default-0f2636-68520145619 | 2023-10-04T17:27:26.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-conll2003-conll2003-e68bb2-67908145608 | 2023-10-04T17:34:15.000Z | [
"autotrain",
"evaluation",
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- conll2003
eval_info:
task: entity_extraction
model: 51la5/bert-large-NER
metrics: ['bertscore']
dataset_name: conll2003
dataset_config: conll2003
dataset_split: train
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: 51la5/bert-large-NER
* Dataset: conll2003
* Config: conll2003
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@theoraclephd](https://huggingface.co/theoraclephd) for evaluating this model. |
autoevaluate/autoeval-eval-acronym_identification-default-0d74fb-68661145622 | 2023-10-04T17:28:00.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-acronym_identification-default-5a5ab9-68735145623 | 2023-10-04T17:28:06.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-acronym_identification-default-1982e3-68759145624 | 2023-10-04T17:28:17.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-acronym_identification-default-b10dc6-68760145625 | 2023-10-04T17:28:21.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-squad_v2-squad_v2-bc82cf-68805145626 | 2023-10-04T17:28:37.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-acronym_identification-default-0f8f5a-68821145627 | 2023-10-04T17:28:45.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-samsum-samsum-e12e62-68887145628 | 2023-10-04T17:28:56.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-samsum-samsum-e12e62-68887145629 | 2023-10-04T17:36:22.000Z | [
"autotrain",
"evaluation",
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: d0rj/rut5-base-summ
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: d0rj/rut5-base-summ
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@d0rj](https://huggingface.co/d0rj) for evaluating this model. |
autoevaluate/autoeval-eval-ade_corpus_v2-Ade_corpus_v2_classification-7bebf7-68056145609 | 2023-10-04T17:29:24.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-medical_questions_pairs-default-d0c070-68078145610 | 2023-10-04T17:31:43.000Z | [
"autotrain",
"evaluation",
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- medical_questions_pairs
eval_info:
task: summarization
model: ARTeLab/it5-summarization-ilpost
metrics: []
dataset_name: medical_questions_pairs
dataset_config: default
dataset_split: train
col_mapping:
text: question_1
target: question_2
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: ARTeLab/it5-summarization-ilpost
* Dataset: medical_questions_pairs
* Config: default
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@halmj](https://huggingface.co/halmj) for evaluating this model. |
autoevaluate/autoeval-eval-xsum-default-199117-68890145630 | 2023-10-04T19:46:09.000Z | [
"autotrain",
"evaluation",
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- xsum
eval_info:
task: summarization
model: d0rj/rut5-base-summ
metrics: []
dataset_name: xsum
dataset_config: default
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: d0rj/rut5-base-summ
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@d0rj](https://huggingface.co/d0rj) for evaluating this model. |
autoevaluate/autoeval-eval-tner__bc5cdr-bc5cdr-657a6e-69032145637 | 2023-10-04T17:30:39.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-ag_news-default-d8388c-69061145638 | 2023-10-04T17:30:51.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-squadshifts-nyt-4ac5f8-69195145640 | 2023-10-04T17:31:25.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-acronym_identification-default-d4c5a8-69063145639 | 2023-10-04T17:31:30.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-acronym_identification-default-6b3ddf-68921145634 | 2023-10-04T17:31:39.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-squadshifts-nyt-4ac5f8-69195145641 | 2023-10-04T17:32:09.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-acronym_identification-default-ab931e-68894145632 | 2023-10-04T17:32:18.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-squadshifts-reddit-75a166-69197145645 | 2023-10-04T17:32:19.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-squadshifts-nyt-98ac89-69196145643 | 2023-10-04T17:32:53.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-xsum-default-199117-68890145631 | 2023-10-04T17:33:15.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-squadshifts-reddit-75a166-69197145644 | 2023-10-04T17:34:06.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-squadshifts-nyt-98ac89-69196145642 | 2023-10-04T17:34:53.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-acronym_identification-default-666611-69327145648 | 2023-10-04T17:35:39.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-squad-plain_text-b528f3-69445145652 | 2023-10-04T17:35:57.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-acronym_identification-default-597533-69343145650 | 2023-10-04T17:36:17.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-acronym_identification-default-b738a0-68973145635 | 2023-10-04T17:36:35.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-squad-plain_text-b528f3-69445145654 | 2023-10-04T17:36:58.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-squad-plain_text-b528f3-69445145655 | 2023-10-04T17:37:09.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-ade_corpus_v2-Ade_corpus_v2_classification-3b42eb-69446145656 | 2023-10-04T17:37:14.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-acronym_identification-default-eb9367-69467145657 | 2023-10-04T17:37:16.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
erhwenkuo/clean_passages_80m-chinese-zhtw | 2023-10-04T21:53:04.000Z | [
"task_categories:text-generation",
"size_categories:10M<n<100M",
"language:zh",
"region:us"
] | erhwenkuo | null | null | null | 0 | 0 | ---
dataset_info:
features:
- name: passage
dtype: string
splits:
- name: train
num_bytes: 18996999214
num_examples: 88328203
download_size: 13088559046
dataset_size: 18996999214
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- text-generation
language:
- zh
size_categories:
- 10M<n<100M
---
# Dataset Card for "clean_passages_80m-chinese-zhtw"
Contains **over 80 million** (88,328,203) Chinese passages that include no Latin letters or digits. Most passages are between 50 and 200 characters long.
The original dataset was used to train the [Chinese version of the GENIUS model](https://huggingface.co/spaces/beyond/genius). Paper citation:
```
@article{guo2022genius,
title={GENIUS: Sketch-based Language Model Pre-training via Extreme and Selective Masking for Text Generation and Augmentation},
author={Guo, Biyang and Gong, Yeyun and Shen, Yelong and Han, Songqiao and Huang, Hailiang and Duan, Nan and Chen, Weizhu},
journal={arXiv preprint arXiv:2211.10330},
year={2022}
}
```
## Dataset Source
This dataset was derived from the [CLUE Chinese pre-training corpus](https://github.com/CLUEbenchmark/CLUE) by processing, filtering, and converting Simplified Chinese to Traditional Chinese.
Original dataset citation:
```
@misc{bright_xu_2019_3402023,
author = {Bright Xu},
title = {NLP Chinese Corpus: Large Scale Chinese Corpus for NLP},
month = sep,
year = 2019,
doi = {10.5281/zenodo.3402023},
version = {1.0},
publisher = {Zenodo},
url = {https://doi.org/10.5281/zenodo.3402023}
}
```
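## Usage
A minimal loading sketch (the `passage` column and the `train` split come from the `dataset_info` metadata above; streaming is used here only to avoid downloading the roughly 13 GB archive up front):
```python
from datasets import load_dataset

# Stream the single "train" split; each record carries one
# "passage" string field, per the dataset_info metadata.
ds = load_dataset(
    "erhwenkuo/clean_passages_80m-chinese-zhtw",
    split="train",
    streaming=True,
)
print(next(iter(ds))["passage"])  # first passage
```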
|
autoevaluate/autoeval-eval-acronym_identification-default-d56b8a-69576145658 | 2023-10-04T17:37:53.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-imdb-plain_text-8a4130-69634145659 | 2023-10-04T17:37:56.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-acronym_identification-default-fc8438-69328145649 | 2023-10-04T17:38:08.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-acronym_identification-default-94a268-69686145661 | 2023-10-04T17:38:30.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-acronym_identification-default-185a2c-69702145662 | 2023-10-04T17:38:33.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-acronym_identification-default-d32055-69703145663 | 2023-10-04T17:38:35.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-acronym_identification-default-cfaff9-69883145664 | 2023-10-04T17:38:45.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-imdb-plain_text-8a4130-69634145660 | 2023-10-04T17:38:51.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-squad_v2-squad_v2-c9866e-69898145665 | 2023-10-04T17:39:08.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-tner__bc5cdr-bc5cdr-0750f3-69912145666 | 2023-10-04T17:39:16.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-tner__bc5cdr-bc5cdr-5c6ce1-69913145667 | 2023-10-04T17:39:18.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-tner__bc5cdr-bc5cdr-5c6ce1-69913145668 | 2023-10-04T17:39:30.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-squadshifts-reddit-e63d81-69198145646 | 2023-10-04T17:39:31.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-tner__bc5cdr-bc5cdr-5c6ce1-69913145669 | 2023-10-04T17:39:53.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-tner__bc5cdr-bc5cdr-247a8b-69915145670 | 2023-10-04T17:39:55.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-tner__bc5cdr-bc5cdr-247a8b-69915145671 | 2023-10-04T17:40:05.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-tner__bc5cdr-bc5cdr-247a8b-69915145672 | 2023-10-04T17:40:21.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-yelp_review_full-yelp_review_full-f52961-69920145673 | 2023-10-04T17:40:31.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-yelp_polarity-plain_text-c254cd-69921145674 | 2023-10-04T17:40:41.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-acronym_identification-default-7eb114-70037145675 | 2023-10-04T17:40:51.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-acronym_identification-default-25bbbc-70173145676 | 2023-10-04T17:40:59.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-squad-plain_text-1683b2-69031145636 | 2023-10-04T17:41:05.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-squad-plain_text-b528f3-69445145651 | 2023-10-04T17:41:22.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-acronym_identification-default-733f7f-70174145677 | 2023-10-04T17:41:24.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-acronym_identification-default-c4974c-70206145678 | 2023-10-04T17:41:30.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-0b96f1-70211145679 | 2023-10-04T17:41:32.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-squad-plain_text-b528f3-69445145653 | 2023-10-04T17:41:43.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-squadshifts-reddit-e63d81-69198145647 | 2023-10-04T17:41:53.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-acronym_identification-default-12c3e0-70217145680 | 2023-10-04T17:42:02.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-acronym_identification-default-f20b7a-70249145681 | 2023-10-04T17:42:06.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-ejschwartz__oo-method-test-bylibrary-test-ejschwartz__o-703054-70259145682 | 2023-10-04T17:42:10.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-amazon_polarity-amazon_polarity-322a2a-70310145683 | 2023-10-04T17:42:17.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-amazon_polarity-amazon_polarity-e891a6-70311145684 | 2023-10-04T17:42:26.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-amazon_polarity-amazon_polarity-976ed0-70312145685 | 2023-10-04T17:42:41.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-yelp_polarity-plain_text-2ac130-70653145686 | 2023-10-04T17:42:48.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-yelp_polarity-plain_text-5c1435-70656145687 | 2023-10-04T17:42:58.000Z | [
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | Entry not found |
autoevaluate/autoeval-eval-OxAISH-AL-LLM__wiki_toxic-default-8c726c-70747145688 | 2023-10-04T18:17:23.000Z | [
"autotrain",
"evaluation",
"region:us"
] | autoevaluate | null | null | null | 0 | 0 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- OxAISH-AL-LLM/wiki_toxic
eval_info:
task: summarization
model: MurkatG/bart-reviews
metrics: ['precision']
dataset_name: OxAISH-AL-LLM/wiki_toxic
dataset_config: default
dataset_split: test
col_mapping:
text: comment_text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: MurkatG/bart-reviews
* Dataset: OxAISH-AL-LLM/wiki_toxic
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Krzys](https://huggingface.co/Krzys) for evaluating this model. |
open-llm-leaderboard/details_beomi__KoRWKV-6B | 2023-10-04T17:44:16.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | null | 0 | 0 | ---
pretty_name: Evaluation run of beomi/KoRWKV-6B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [beomi/KoRWKV-6B](https://huggingface.co/beomi/KoRWKV-6B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 61 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_beomi__KoRWKV-6B\"\
,\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\
\nThese are the [latest results from run 2023-10-04T17:42:58.699001](https://huggingface.co/datasets/open-llm-leaderboard/details_beomi__KoRWKV-6B/blob/main/results_2023-10-04T17-42-58.699001.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.24676461098562108,\n\
\ \"acc_stderr\": 0.03124881475395607,\n \"acc_norm\": 0.2477296300407201,\n\
\ \"acc_norm_stderr\": 0.03126094663304752,\n \"mc1\": 0.19951040391676866,\n\
\ \"mc1_stderr\": 0.013989929967559647,\n \"mc2\": 0.3904917327630846,\n\
\ \"mc2_stderr\": 0.014874434046360765\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.19283276450511946,\n \"acc_stderr\": 0.01152905546566333,\n\
\ \"acc_norm\": 0.22098976109215018,\n \"acc_norm_stderr\": 0.012124929206818258\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.2930691097390958,\n\
\ \"acc_stderr\": 0.0045423962699992155,\n \"acc_norm\": 0.3218482374029078,\n\
\ \"acc_norm_stderr\": 0.004662303395239619\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.22,\n \"acc_stderr\": 0.04163331998932268,\n \
\ \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.04163331998932268\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.32592592592592595,\n\
\ \"acc_stderr\": 0.040491220417025055,\n \"acc_norm\": 0.32592592592592595,\n\
\ \"acc_norm_stderr\": 0.040491220417025055\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.2236842105263158,\n \"acc_stderr\": 0.03391160934343602,\n\
\ \"acc_norm\": 0.2236842105263158,\n \"acc_norm_stderr\": 0.03391160934343602\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.33,\n\
\ \"acc_stderr\": 0.04725815626252605,\n \"acc_norm\": 0.33,\n \
\ \"acc_norm_stderr\": 0.04725815626252605\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.22641509433962265,\n \"acc_stderr\": 0.025757559893106734,\n\
\ \"acc_norm\": 0.22641509433962265,\n \"acc_norm_stderr\": 0.025757559893106734\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.22916666666666666,\n\
\ \"acc_stderr\": 0.03514697467862388,\n \"acc_norm\": 0.22916666666666666,\n\
\ \"acc_norm_stderr\": 0.03514697467862388\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.2,\n \"acc_stderr\": 0.04020151261036846,\n \
\ \"acc_norm\": 0.2,\n \"acc_norm_stderr\": 0.04020151261036846\n },\n\
\ \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.32,\n\
\ \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.32,\n \
\ \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.2,\n \"acc_stderr\": 0.04020151261036846,\n \
\ \"acc_norm\": 0.2,\n \"acc_norm_stderr\": 0.04020151261036846\n },\n\
\ \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.21965317919075145,\n\
\ \"acc_stderr\": 0.031568093627031744,\n \"acc_norm\": 0.21965317919075145,\n\
\ \"acc_norm_stderr\": 0.031568093627031744\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.21568627450980393,\n \"acc_stderr\": 0.04092563958237654,\n\
\ \"acc_norm\": 0.21568627450980393,\n \"acc_norm_stderr\": 0.04092563958237654\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n\
\ \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.20425531914893616,\n \"acc_stderr\": 0.026355158413349428,\n\
\ \"acc_norm\": 0.20425531914893616,\n \"acc_norm_stderr\": 0.026355158413349428\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.22807017543859648,\n\
\ \"acc_stderr\": 0.03947152782669415,\n \"acc_norm\": 0.22807017543859648,\n\
\ \"acc_norm_stderr\": 0.03947152782669415\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.23448275862068965,\n \"acc_stderr\": 0.035306258743465914,\n\
\ \"acc_norm\": 0.23448275862068965,\n \"acc_norm_stderr\": 0.035306258743465914\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.2222222222222222,\n \"acc_stderr\": 0.02141168439369419,\n \"\
acc_norm\": 0.2222222222222222,\n \"acc_norm_stderr\": 0.02141168439369419\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.1984126984126984,\n\
\ \"acc_stderr\": 0.035670166752768635,\n \"acc_norm\": 0.1984126984126984,\n\
\ \"acc_norm_stderr\": 0.035670166752768635\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.19,\n \"acc_stderr\": 0.039427724440366234,\n \
\ \"acc_norm\": 0.19,\n \"acc_norm_stderr\": 0.039427724440366234\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.267741935483871,\n \"acc_stderr\": 0.025189006660212374,\n \"\
acc_norm\": 0.267741935483871,\n \"acc_norm_stderr\": 0.025189006660212374\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.2955665024630542,\n \"acc_stderr\": 0.032104944337514575,\n \"\
acc_norm\": 0.2955665024630542,\n \"acc_norm_stderr\": 0.032104944337514575\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.16,\n \"acc_stderr\": 0.036845294917747094,\n \"acc_norm\"\
: 0.16,\n \"acc_norm_stderr\": 0.036845294917747094\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.24242424242424243,\n \"acc_stderr\": 0.03346409881055953,\n\
\ \"acc_norm\": 0.24242424242424243,\n \"acc_norm_stderr\": 0.03346409881055953\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.2727272727272727,\n \"acc_stderr\": 0.03173071239071724,\n \"\
acc_norm\": 0.2727272727272727,\n \"acc_norm_stderr\": 0.03173071239071724\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.23834196891191708,\n \"acc_stderr\": 0.03074890536390991,\n\
\ \"acc_norm\": 0.23834196891191708,\n \"acc_norm_stderr\": 0.03074890536390991\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.2512820512820513,\n \"acc_stderr\": 0.021992016662370557,\n\
\ \"acc_norm\": 0.2512820512820513,\n \"acc_norm_stderr\": 0.021992016662370557\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.22962962962962963,\n \"acc_stderr\": 0.02564410863926763,\n \
\ \"acc_norm\": 0.22962962962962963,\n \"acc_norm_stderr\": 0.02564410863926763\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.3067226890756303,\n \"acc_stderr\": 0.02995382389188704,\n \
\ \"acc_norm\": 0.3067226890756303,\n \"acc_norm_stderr\": 0.02995382389188704\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.2251655629139073,\n \"acc_stderr\": 0.03410435282008936,\n \"\
acc_norm\": 0.2251655629139073,\n \"acc_norm_stderr\": 0.03410435282008936\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.23302752293577983,\n \"acc_stderr\": 0.018125669180861493,\n \"\
acc_norm\": 0.23302752293577983,\n \"acc_norm_stderr\": 0.018125669180861493\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.36574074074074076,\n \"acc_stderr\": 0.03284738857647207,\n \"\
acc_norm\": 0.36574074074074076,\n \"acc_norm_stderr\": 0.03284738857647207\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.27941176470588236,\n \"acc_stderr\": 0.031493281045079556,\n \"\
acc_norm\": 0.27941176470588236,\n \"acc_norm_stderr\": 0.031493281045079556\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.25738396624472576,\n \"acc_stderr\": 0.028458820991460295,\n \
\ \"acc_norm\": 0.25738396624472576,\n \"acc_norm_stderr\": 0.028458820991460295\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.2062780269058296,\n\
\ \"acc_stderr\": 0.027157150479563824,\n \"acc_norm\": 0.2062780269058296,\n\
\ \"acc_norm_stderr\": 0.027157150479563824\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.2900763358778626,\n \"acc_stderr\": 0.03980066246467765,\n\
\ \"acc_norm\": 0.2900763358778626,\n \"acc_norm_stderr\": 0.03980066246467765\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.2892561983471074,\n \"acc_stderr\": 0.04139112727635463,\n \"\
acc_norm\": 0.2892561983471074,\n \"acc_norm_stderr\": 0.04139112727635463\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.25925925925925924,\n\
\ \"acc_stderr\": 0.042365112580946315,\n \"acc_norm\": 0.25925925925925924,\n\
\ \"acc_norm_stderr\": 0.042365112580946315\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.25766871165644173,\n \"acc_stderr\": 0.03436150827846917,\n\
\ \"acc_norm\": 0.25766871165644173,\n \"acc_norm_stderr\": 0.03436150827846917\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.26785714285714285,\n\
\ \"acc_stderr\": 0.04203277291467763,\n \"acc_norm\": 0.26785714285714285,\n\
\ \"acc_norm_stderr\": 0.04203277291467763\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.21359223300970873,\n \"acc_stderr\": 0.040580420156460344,\n\
\ \"acc_norm\": 0.21359223300970873,\n \"acc_norm_stderr\": 0.040580420156460344\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.2264957264957265,\n\
\ \"acc_stderr\": 0.027421007295392916,\n \"acc_norm\": 0.2264957264957265,\n\
\ \"acc_norm_stderr\": 0.027421007295392916\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.27330779054916987,\n\
\ \"acc_stderr\": 0.015936681062628556,\n \"acc_norm\": 0.27330779054916987,\n\
\ \"acc_norm_stderr\": 0.015936681062628556\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.24566473988439305,\n \"acc_stderr\": 0.023176298203992016,\n\
\ \"acc_norm\": 0.24566473988439305,\n \"acc_norm_stderr\": 0.023176298203992016\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.24916201117318434,\n\
\ \"acc_stderr\": 0.014465893829859926,\n \"acc_norm\": 0.24916201117318434,\n\
\ \"acc_norm_stderr\": 0.014465893829859926\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.25163398692810457,\n \"acc_stderr\": 0.024848018263875195,\n\
\ \"acc_norm\": 0.25163398692810457,\n \"acc_norm_stderr\": 0.024848018263875195\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.27009646302250806,\n\
\ \"acc_stderr\": 0.025218040373410612,\n \"acc_norm\": 0.27009646302250806,\n\
\ \"acc_norm_stderr\": 0.025218040373410612\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.3055555555555556,\n \"acc_stderr\": 0.025630824975621358,\n\
\ \"acc_norm\": 0.3055555555555556,\n \"acc_norm_stderr\": 0.025630824975621358\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.20567375886524822,\n \"acc_stderr\": 0.024112138950471887,\n \
\ \"acc_norm\": 0.20567375886524822,\n \"acc_norm_stderr\": 0.024112138950471887\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.26140808344198174,\n\
\ \"acc_stderr\": 0.01122252816977131,\n \"acc_norm\": 0.26140808344198174,\n\
\ \"acc_norm_stderr\": 0.01122252816977131\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.22794117647058823,\n \"acc_stderr\": 0.025483081468029804,\n\
\ \"acc_norm\": 0.22794117647058823,\n \"acc_norm_stderr\": 0.025483081468029804\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.2696078431372549,\n \"acc_stderr\": 0.017952449196987862,\n \
\ \"acc_norm\": 0.2696078431372549,\n \"acc_norm_stderr\": 0.017952449196987862\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.16363636363636364,\n\
\ \"acc_stderr\": 0.03543433054298678,\n \"acc_norm\": 0.16363636363636364,\n\
\ \"acc_norm_stderr\": 0.03543433054298678\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.2163265306122449,\n \"acc_stderr\": 0.02635891633490404,\n\
\ \"acc_norm\": 0.2163265306122449,\n \"acc_norm_stderr\": 0.02635891633490404\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.21393034825870647,\n\
\ \"acc_stderr\": 0.028996909693328927,\n \"acc_norm\": 0.21393034825870647,\n\
\ \"acc_norm_stderr\": 0.028996909693328927\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.23,\n \"acc_stderr\": 0.04229525846816505,\n \
\ \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.04229525846816505\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.23493975903614459,\n\
\ \"acc_stderr\": 0.03300533186128922,\n \"acc_norm\": 0.23493975903614459,\n\
\ \"acc_norm_stderr\": 0.03300533186128922\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.2807017543859649,\n \"acc_stderr\": 0.034462962170884265,\n\
\ \"acc_norm\": 0.2807017543859649,\n \"acc_norm_stderr\": 0.034462962170884265\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.19951040391676866,\n\
\ \"mc1_stderr\": 0.013989929967559647,\n \"mc2\": 0.3904917327630846,\n\
\ \"mc2_stderr\": 0.014874434046360765\n }\n}\n```"
repo_url: https://huggingface.co/beomi/KoRWKV-6B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|arc:challenge|25_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hellaswag|10_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T17-42-58.699001.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-04T17-42-58.699001.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-04T17-42-58.699001.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-04T17-42-58.699001.parquet'
- config_name: results
data_files:
- split: 2023_10_04T17_42_58.699001
path:
- results_2023-10-04T17-42-58.699001.parquet
- split: latest
path:
- results_2023-10-04T17-42-58.699001.parquet
---
# Dataset Card for Evaluation run of beomi/KoRWKV-6B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/beomi/KoRWKV-6B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [beomi/KoRWKV-6B](https://huggingface.co/beomi/KoRWKV-6B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration, "results", stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_beomi__KoRWKV-6B",
"harness_truthfulqa_mc_0",
split="train")
```
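You can also query the aggregated metrics rather than the per-task details; here is a minimal sketch using the same API (the "results" configuration and the "latest" split are both declared in this repository's configuration above):
```python
from datasets import load_dataset

# Load the aggregated results of the most recent evaluation run.
results = load_dataset("open-llm-leaderboard/details_beomi__KoRWKV-6B",
    "results",
    split="latest")
```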
## Latest results
These are the [latest results from run 2023-10-04T17:42:58.699001](https://huggingface.co/datasets/open-llm-leaderboard/details_beomi__KoRWKV-6B/blob/main/results_2023-10-04T17-42-58.699001.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.24676461098562108,
"acc_stderr": 0.03124881475395607,
"acc_norm": 0.2477296300407201,
"acc_norm_stderr": 0.03126094663304752,
"mc1": 0.19951040391676866,
"mc1_stderr": 0.013989929967559647,
"mc2": 0.3904917327630846,
"mc2_stderr": 0.014874434046360765
},
"harness|arc:challenge|25": {
"acc": 0.19283276450511946,
"acc_stderr": 0.01152905546566333,
"acc_norm": 0.22098976109215018,
"acc_norm_stderr": 0.012124929206818258
},
"harness|hellaswag|10": {
"acc": 0.2930691097390958,
"acc_stderr": 0.0045423962699992155,
"acc_norm": 0.3218482374029078,
"acc_norm_stderr": 0.004662303395239619
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.22,
"acc_stderr": 0.04163331998932268,
"acc_norm": 0.22,
"acc_norm_stderr": 0.04163331998932268
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.32592592592592595,
"acc_stderr": 0.040491220417025055,
"acc_norm": 0.32592592592592595,
"acc_norm_stderr": 0.040491220417025055
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.2236842105263158,
"acc_stderr": 0.03391160934343602,
"acc_norm": 0.2236842105263158,
"acc_norm_stderr": 0.03391160934343602
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252605,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252605
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.22641509433962265,
"acc_stderr": 0.025757559893106734,
"acc_norm": 0.22641509433962265,
"acc_norm_stderr": 0.025757559893106734
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.22916666666666666,
"acc_stderr": 0.03514697467862388,
"acc_norm": 0.22916666666666666,
"acc_norm_stderr": 0.03514697467862388
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.2,
"acc_stderr": 0.04020151261036846,
"acc_norm": 0.2,
"acc_norm_stderr": 0.04020151261036846
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.2,
"acc_stderr": 0.04020151261036846,
"acc_norm": 0.2,
"acc_norm_stderr": 0.04020151261036846
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.21965317919075145,
"acc_stderr": 0.031568093627031744,
"acc_norm": 0.21965317919075145,
"acc_norm_stderr": 0.031568093627031744
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.21568627450980393,
"acc_stderr": 0.04092563958237654,
"acc_norm": 0.21568627450980393,
"acc_norm_stderr": 0.04092563958237654
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.20425531914893616,
"acc_stderr": 0.026355158413349428,
"acc_norm": 0.20425531914893616,
"acc_norm_stderr": 0.026355158413349428
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.22807017543859648,
"acc_stderr": 0.03947152782669415,
"acc_norm": 0.22807017543859648,
"acc_norm_stderr": 0.03947152782669415
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.23448275862068965,
"acc_stderr": 0.035306258743465914,
"acc_norm": 0.23448275862068965,
"acc_norm_stderr": 0.035306258743465914
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.2222222222222222,
"acc_stderr": 0.02141168439369419,
"acc_norm": 0.2222222222222222,
"acc_norm_stderr": 0.02141168439369419
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.1984126984126984,
"acc_stderr": 0.035670166752768635,
"acc_norm": 0.1984126984126984,
"acc_norm_stderr": 0.035670166752768635
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.19,
"acc_stderr": 0.039427724440366234,
"acc_norm": 0.19,
"acc_norm_stderr": 0.039427724440366234
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.267741935483871,
"acc_stderr": 0.025189006660212374,
"acc_norm": 0.267741935483871,
"acc_norm_stderr": 0.025189006660212374
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.2955665024630542,
"acc_stderr": 0.032104944337514575,
"acc_norm": 0.2955665024630542,
"acc_norm_stderr": 0.032104944337514575
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.16,
"acc_stderr": 0.036845294917747094,
"acc_norm": 0.16,
"acc_norm_stderr": 0.036845294917747094
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.24242424242424243,
"acc_stderr": 0.03346409881055953,
"acc_norm": 0.24242424242424243,
"acc_norm_stderr": 0.03346409881055953
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.2727272727272727,
"acc_stderr": 0.03173071239071724,
"acc_norm": 0.2727272727272727,
"acc_norm_stderr": 0.03173071239071724
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.23834196891191708,
"acc_stderr": 0.03074890536390991,
"acc_norm": 0.23834196891191708,
"acc_norm_stderr": 0.03074890536390991
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.2512820512820513,
"acc_stderr": 0.021992016662370557,
"acc_norm": 0.2512820512820513,
"acc_norm_stderr": 0.021992016662370557
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.22962962962962963,
"acc_stderr": 0.02564410863926763,
"acc_norm": 0.22962962962962963,
"acc_norm_stderr": 0.02564410863926763
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.3067226890756303,
"acc_stderr": 0.02995382389188704,
"acc_norm": 0.3067226890756303,
"acc_norm_stderr": 0.02995382389188704
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2251655629139073,
"acc_stderr": 0.03410435282008936,
"acc_norm": 0.2251655629139073,
"acc_norm_stderr": 0.03410435282008936
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.23302752293577983,
"acc_stderr": 0.018125669180861493,
"acc_norm": 0.23302752293577983,
"acc_norm_stderr": 0.018125669180861493
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.36574074074074076,
"acc_stderr": 0.03284738857647207,
"acc_norm": 0.36574074074074076,
"acc_norm_stderr": 0.03284738857647207
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.27941176470588236,
"acc_stderr": 0.031493281045079556,
"acc_norm": 0.27941176470588236,
"acc_norm_stderr": 0.031493281045079556
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.25738396624472576,
"acc_stderr": 0.028458820991460295,
"acc_norm": 0.25738396624472576,
"acc_norm_stderr": 0.028458820991460295
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.2062780269058296,
"acc_stderr": 0.027157150479563824,
"acc_norm": 0.2062780269058296,
"acc_norm_stderr": 0.027157150479563824
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.2900763358778626,
"acc_stderr": 0.03980066246467765,
"acc_norm": 0.2900763358778626,
"acc_norm_stderr": 0.03980066246467765
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.2892561983471074,
"acc_stderr": 0.04139112727635463,
"acc_norm": 0.2892561983471074,
"acc_norm_stderr": 0.04139112727635463
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.25925925925925924,
"acc_stderr": 0.042365112580946315,
"acc_norm": 0.25925925925925924,
"acc_norm_stderr": 0.042365112580946315
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.25766871165644173,
"acc_stderr": 0.03436150827846917,
"acc_norm": 0.25766871165644173,
"acc_norm_stderr": 0.03436150827846917
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.26785714285714285,
"acc_stderr": 0.04203277291467763,
"acc_norm": 0.26785714285714285,
"acc_norm_stderr": 0.04203277291467763
},
"harness|hendrycksTest-management|5": {
"acc": 0.21359223300970873,
"acc_stderr": 0.040580420156460344,
"acc_norm": 0.21359223300970873,
"acc_norm_stderr": 0.040580420156460344
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.2264957264957265,
"acc_stderr": 0.027421007295392916,
"acc_norm": 0.2264957264957265,
"acc_norm_stderr": 0.027421007295392916
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.27330779054916987,
"acc_stderr": 0.015936681062628556,
"acc_norm": 0.27330779054916987,
"acc_norm_stderr": 0.015936681062628556
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.24566473988439305,
"acc_stderr": 0.023176298203992016,
"acc_norm": 0.24566473988439305,
"acc_norm_stderr": 0.023176298203992016
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.24916201117318434,
"acc_stderr": 0.014465893829859926,
"acc_norm": 0.24916201117318434,
"acc_norm_stderr": 0.014465893829859926
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.25163398692810457,
"acc_stderr": 0.024848018263875195,
"acc_norm": 0.25163398692810457,
"acc_norm_stderr": 0.024848018263875195
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.27009646302250806,
"acc_stderr": 0.025218040373410612,
"acc_norm": 0.27009646302250806,
"acc_norm_stderr": 0.025218040373410612
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.3055555555555556,
"acc_stderr": 0.025630824975621358,
"acc_norm": 0.3055555555555556,
"acc_norm_stderr": 0.025630824975621358
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.20567375886524822,
"acc_stderr": 0.024112138950471887,
"acc_norm": 0.20567375886524822,
"acc_norm_stderr": 0.024112138950471887
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.26140808344198174,
"acc_stderr": 0.01122252816977131,
"acc_norm": 0.26140808344198174,
"acc_norm_stderr": 0.01122252816977131
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.22794117647058823,
"acc_stderr": 0.025483081468029804,
"acc_norm": 0.22794117647058823,
"acc_norm_stderr": 0.025483081468029804
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.2696078431372549,
"acc_stderr": 0.017952449196987862,
"acc_norm": 0.2696078431372549,
"acc_norm_stderr": 0.017952449196987862
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.16363636363636364,
"acc_stderr": 0.03543433054298678,
"acc_norm": 0.16363636363636364,
"acc_norm_stderr": 0.03543433054298678
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.2163265306122449,
"acc_stderr": 0.02635891633490404,
"acc_norm": 0.2163265306122449,
"acc_norm_stderr": 0.02635891633490404
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.21393034825870647,
"acc_stderr": 0.028996909693328927,
"acc_norm": 0.21393034825870647,
"acc_norm_stderr": 0.028996909693328927
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.23,
"acc_stderr": 0.04229525846816505,
"acc_norm": 0.23,
"acc_norm_stderr": 0.04229525846816505
},
"harness|hendrycksTest-virology|5": {
"acc": 0.23493975903614459,
"acc_stderr": 0.03300533186128922,
"acc_norm": 0.23493975903614459,
"acc_norm_stderr": 0.03300533186128922
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.2807017543859649,
"acc_stderr": 0.034462962170884265,
"acc_norm": 0.2807017543859649,
"acc_norm_stderr": 0.034462962170884265
},
"harness|truthfulqa:mc|0": {
"mc1": 0.19951040391676866,
"mc1_stderr": 0.013989929967559647,
"mc2": 0.3904917327630846,
"mc2_stderr": 0.014874434046360765
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
BangumiBase/isekaidecheatskill | 2023-10-04T18:51:26.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Isekai De Cheat Skill
This is the image base of the bangumi Isekai de Cheat Skill; we detected 22 characters and 1032 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may still contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potentially noisy samples (each image has roughly a 1% chance of being noise).
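For reference, here is a minimal sketch of downloading and extracting the full archive before doing that preprocessing (it assumes the `huggingface_hub` client; the output directory name is illustrative):
```python
import zipfile

from huggingface_hub import hf_hub_download

# Fetch the full image archive (all.zip) from this dataset repository.
archive = hf_hub_download(repo_id="BangumiBase/isekaidecheatskill",
                          filename="all.zip",
                          repo_type="dataset")

# Extract locally; inspect and filter the images before training.
with zipfile.ZipFile(archive) as zf:
    zf.extractall("isekaidecheatskill_images")
```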
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 309 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 23 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 17 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 10 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 24 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 9 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 29 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 8 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 59 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 76 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 19 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 9 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 7 | [Download](12/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 13 | 16 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 6 | [Download](14/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 15 | 10 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 15 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 11 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 73 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 10 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 52 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 240 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
Mtjay/myDataSet | 2023-10-10T23:21:53.000Z | [
"license:other",
"region:us"
] | Mtjay | null | null | null | 0 | 0 | ---
license: other
license_name: my-license
license_link: LICENSE
---
|
BangumiBase/mawarupenguindrum | 2023-10-04T19:21:03.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Mawaru Penguindrum
This is the image base of the bangumi Mawaru Penguindrum; we detected 23 characters and 1725 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may still contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potentially noisy samples (each image has roughly a 1% chance of being noise).
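If you only need a single character cluster rather than the full archive, a similar sketch works with the per-character zips linked in the table below (the cluster index and output directory here are illustrative):
```python
import zipfile

from huggingface_hub import hf_hub_download

# Fetch one character cluster, e.g. cluster 13 from the table below.
archive = hf_hub_download(repo_id="BangumiBase/mawarupenguindrum",
                          filename="13/dataset.zip",
                          repo_type="dataset")

with zipfile.ZipFile(archive) as zf:
    zf.extractall("penguindrum_character_13")
```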
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 19 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 177 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 81 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 18 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 76 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 206 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 19 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 14 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 64 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 11 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 313 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 24 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 11 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 306 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 19 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 19 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 13 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 16 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 37 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 17 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 17 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 8 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 240 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
BangumiBase/striketheblood | 2023-10-04T20:43:50.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | BangumiBase | null | null | null | 0 | 0 | ---
license: mit
tags:
- art
size_categories:
- 1K<n<10K
---
# Bangumi Image Base of Strike The Blood
This is the image base of the bangumi Strike the Blood; we detected 66 characters and 5038 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may still contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potentially noisy samples (each image has roughly a 1% chance of being noise).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 781 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 24 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 144 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 60 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 16 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 49 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 149 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 40 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 44 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 1115 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 70 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 55 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 128 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 121 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 14 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 31 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 48 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 26 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 31 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 53 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 124 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 51 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 89 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 21 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 24 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 54 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 28 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 31 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 26 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 36 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 14 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 19 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 13 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| 33 | 60 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| 34 | 19 | [Download](34/dataset.zip) |  |  |  |  |  |  |  |  |
| 35 | 8 | [Download](35/dataset.zip) |  |  |  |  |  |  |  |  |
| 36 | 9 | [Download](36/dataset.zip) |  |  |  |  |  |  |  |  |
| 37 | 178 | [Download](37/dataset.zip) |  |  |  |  |  |  |  |  |
| 38 | 18 | [Download](38/dataset.zip) |  |  |  |  |  |  |  |  |
| 39 | 32 | [Download](39/dataset.zip) |  |  |  |  |  |  |  |  |
| 40 | 105 | [Download](40/dataset.zip) |  |  |  |  |  |  |  |  |
| 41 | 210 | [Download](41/dataset.zip) |  |  |  |  |  |  |  |  |
| 42 | 77 | [Download](42/dataset.zip) |  |  |  |  |  |  |  |  |
| 43 | 49 | [Download](43/dataset.zip) |  |  |  |  |  |  |  |  |
| 44 | 11 | [Download](44/dataset.zip) |  |  |  |  |  |  |  |  |
| 45 | 23 | [Download](45/dataset.zip) |  |  |  |  |  |  |  |  |
| 46 | 34 | [Download](46/dataset.zip) |  |  |  |  |  |  |  |  |
| 47 | 13 | [Download](47/dataset.zip) |  |  |  |  |  |  |  |  |
| 48 | 14 | [Download](48/dataset.zip) |  |  |  |  |  |  |  |  |
| 49 | 18 | [Download](49/dataset.zip) |  |  |  |  |  |  |  |  |
| 50 | 8 | [Download](50/dataset.zip) |  |  |  |  |  |  |  |  |
| 51 | 14 | [Download](51/dataset.zip) |  |  |  |  |  |  |  |  |
| 52 | 86 | [Download](52/dataset.zip) |  |  |  |  |  |  |  |  |
| 53 | 27 | [Download](53/dataset.zip) |  |  |  |  |  |  |  |  |
| 54 | 12 | [Download](54/dataset.zip) |  |  |  |  |  |  |  |  |
| 55 | 16 | [Download](55/dataset.zip) |  |  |  |  |  |  |  |  |
| 56 | 7 | [Download](56/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 57 | 16 | [Download](57/dataset.zip) |  |  |  |  |  |  |  |  |
| 58 | 19 | [Download](58/dataset.zip) |  |  |  |  |  |  |  |  |
| 59 | 6 | [Download](59/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 60 | 22 | [Download](60/dataset.zip) |  |  |  |  |  |  |  |  |
| 61 | 6 | [Download](61/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 62 | 6 | [Download](62/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 63 | 16 | [Download](63/dataset.zip) |  |  |  |  |  |  |  |  |
| 64 | 56 | [Download](64/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 314 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
atmallen/sloppy_addition_AB_1.0 | 2023-10-05T17:49:35.000Z | [
"region:us"
] | atmallen | null | null | null | 0 | 0 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: statement
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: true_label
dtype: bool
- name: id
dtype: int64
splits:
- name: train
num_bytes: 17092688
num_examples: 400000
- name: validation
num_bytes: 1709898
num_examples: 40000
- name: test
num_bytes: 1707310
num_examples: 40000
download_size: 0
dataset_size: 20509896
---
# Dataset Card for "sloppy_addition_AB_1.0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sleepyboyeyes/Bella | 2023-10-04T20:59:12.000Z | [
"region:us"
] | sleepyboyeyes | null | null | null | 0 | 0 | Entry not found |
JoSw-14/chem-0-5000 | 2023-10-05T19:06:08.000Z | [
"region:us"
] | JoSw-14 | null | null | null | 0 | 0 | Entry not found |
JoSw-14/chem-2-5002 | 2023-10-04T18:29:00.000Z | [
"region:us"
] | JoSw-14 | null | null | null | 0 | 0 | Entry not found |
JoSw-14/chem-1-5001 | 2023-10-04T18:29:00.000Z | [
"region:us"
] | JoSw-14 | null | null | null | 0 | 0 | Entry not found |
JoSw-14/chem-3-5003 | 2023-10-04T18:29:01.000Z | [
"region:us"
] | JoSw-14 | null | null | null | 0 | 0 | Entry not found |
JoSw-14/chem-5-5005 | 2023-10-04T18:29:01.000Z | [
"region:us"
] | JoSw-14 | null | null | null | 0 | 0 | Entry not found |
JoSw-14/chem-4-5004 | 2023-10-04T18:29:01.000Z | [
"region:us"
] | JoSw-14 | null | null | null | 0 | 0 | Entry not found |