| datasetId | card |
|---|---|
yayah/sewee | ---
license: bigscience-openrail-m
---
|
goodfellowliu/City100 | ---
license: apache-2.0
---
|
cyzhh/MMOS | ---
license: mit
task_categories:
- question-answering
language:
- en
tags:
- math
- reasoning
- code
size_categories:
- 100K<n<1M
---
[ArXiv](https://arxiv.org/abs/2403.00799) | [Models](https://pan.quark.cn/s/2d16e640ed07) | [Data](https://huggingface.co/datasets/cyzhh/MMOS) | [Code](https://github.com/cyzhh/MMOS) |
You can download the dataset as follows:
```python
from datasets import load_dataset
ds = load_dataset("cyzhh/MMOS")
```
### Schema
Each dataset row has the following structure:
```python
{
"idx": ..., # problem id
"prompt": ..., # problem
"completion": ... # reasoning path with python
}
```
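Since each `completion` is a reasoning path with embedded Python, a common post-processing step is pulling the code back out. A minimal sketch, using an invented row (the `completion` text below is illustrative, not taken from the dataset):

```python
import re

# The triple-backtick fence, built programmatically so this card's own fences stay intact.
FENCE = "`" * 3

# A hypothetical row matching the schema above; the completion text is invented.
row = {
    "idx": 0,
    "prompt": "What is 2 + 3?",
    "completion": f"Let's compute it.\n{FENCE}python\nprint(2 + 3)\n{FENCE}\nThe answer is 5.",
}

# Reasoning paths embed fenced Python code; extract the code with a regex.
code_blocks = re.findall(FENCE + r"python\n(.*?)" + FENCE, row["completion"], re.DOTALL)
print(code_blocks[0].strip())  # print(2 + 3)
```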
### License
We do not alter the license of any of the underlying data.
### Citation
If you use MMOS, please cite:
```
@misc{chen2024empirical,
title={An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning},
author={Zui Chen and Yezeng Chen and Jiaqi Han and Zhijie Huang and Ji Qi and Yi Zhou},
year={2024},
eprint={2403.00799},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
TinyPixel/lima-u2 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1778347
num_examples: 780
download_size: 1041113
dataset_size: 1778347
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "lima-u2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-staging-eval-project-squad-95d5e1fd-11835577 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad
eval_info:
task: extractive_question_answering
model: mbartolo/roberta-large-synqa
metrics: []
dataset_name: squad
dataset_config: plain_text
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: mbartolo/roberta-large-synqa
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mbartolo](https://huggingface.co/mbartolo) for evaluating this model. |
ravithejads/test | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: instruction_translated
dtype: string
- name: input_translated
dtype: string
- name: output_translated
dtype: string
splits:
- name: train
num_bytes: 33589
num_examples: 10
download_size: 41769
dataset_size: 33589
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hippocrates/PubMed_Summ_train | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 54379474
num_examples: 26570
download_size: 29277288
dataset_size: 54379474
---
# Dataset Card for "PubMed_Summ_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ThWu/Chat_22k | ---
language:
- en
dataset_info:
features:
- name: prompt
dtype: string
- name: question_id
dtype: int64
splits:
- name: train
num_bytes: 5567423
num_examples: 22000
download_size: 3590268
dataset_size: 5567423
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
llm-aes/asap-7-original | ---
dataset_info:
features:
- name: essay_id
dtype: int64
- name: essay_set
dtype: int64
- name: essay
dtype: string
- name: rater1_domain1
dtype: int64
- name: rater2_domain1
dtype: int64
- name: domain1_score
dtype: int64
- name: rater1_trait1
dtype: float64
- name: rater1_trait2
dtype: float64
- name: rater1_trait3
dtype: float64
- name: rater1_trait4
dtype: float64
- name: rater2_trait1
dtype: float64
- name: rater2_trait2
dtype: float64
- name: rater2_trait3
dtype: float64
- name: rater2_trait4
dtype: float64
- name: rubrics
dtype: string
- name: prompt
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 4907573
num_examples: 1569
download_size: 842177
dataset_size: 4907573
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
autoevaluate/autoeval-eval-HadiPourmousa__TextSummarization-HadiPourmousa__TextSum-31dfb4-1463253932 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- HadiPourmousa/TextSummarization
eval_info:
task: summarization
model: shivaniNK8/t5-small-finetuned-cnn-news
metrics: []
dataset_name: HadiPourmousa/TextSummarization
dataset_config: HadiPourmousa--TextSummarization
dataset_split: train
col_mapping:
text: Text
target: Title
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: shivaniNK8/t5-small-finetuned-cnn-news
* Dataset: HadiPourmousa/TextSummarization
* Config: HadiPourmousa--TextSummarization
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@marcmaxmeister](https://huggingface.co/marcmaxmeister) for evaluating this model. |
xcz0/Aspect-Based_Sentiment_Analysis_for_Catering | ---
task_categories:
- text-classification
size_categories:
- 10M<n<100M
---
# 说明 (Description)
数据集来源于[AI Challenger 2018](https://github.com/AIChallenger/AI_Challenger_2018)
The dataset comes from [AI Challenger 2018](https://github.com/AIChallenger/AI_Challenger_2018).
- `sentiment_analysis_trainingset.csv` 为训练集数据文件,共105000条评论数据 (training set, 105,000 reviews)
- `sentiment_analysis_validationset.csv` 为验证集数据文件,共15000条评论数据 (validation set, 15,000 reviews)
- `sentiment_analysis_testa.csv` 为测试集A数据文件,共15000条评论数据 (test set A, 15,000 reviews)
数据集分为训练、验证、测试A与测试B四部分。数据集中的评价对象按照粒度不同划分为两个层次,层次一为粗粒度的评价对象,例如评论文本中涉及的服务、位置等要素;层次二为细粒度的情感对象,例如“服务”属性中的“服务人员态度”、“排队等候时间”等细粒度要素。评价对象的具体划分如下表所示。
The dataset is divided into four parts: training, validation, test A and test B. This dataset builds a two-layer labeling system according to the evaluation granularity: the first layer is the coarse-grained evaluation object, such as “service” and “location”; the second layer is the fine-grained emotion object, such as “waiter’s attitude” and “wait time” in “service” category. The specific description is shown in the following table.
|层次一(The first layer)|层次二(The second layer)|
|---|---|
|位置(location)|交通是否便利(traffic convenience)|
|-|距离商圈远近(distance from business district)|
|-|是否容易寻找(easy to find)|
|服务(service)|排队等候时间(wait time)|
|-|服务人员态度(waiter’s attitude)|
|-|是否容易停车(parking convenience)|
|-|点菜/上菜速度(serving speed)|
|价格(price)|价格水平(price level)|
|-|性价比(cost-effective)|
|-|折扣力度(discount)|
|环境(environment)|装修情况(decoration)|
|-|嘈杂情况(noise)|
|-|就餐空间(space)|
|-|卫生情况(cleanliness)|
|菜品(dish)|分量(portion)|
|-|口感(taste)|
|-|外观(look)|
|-|推荐程度(recommendation)|
|其他(others)|本次消费感受(overall experience)|
|-|再次消费的意愿(willing to consume again)|
每个细粒度要素的情感倾向有四种状态:正向、中性、负向、未提及。使用[1,0,-1,-2]四个值对情感倾向进行描述,情感倾向值及其含义对照表如下所示:
There are four sentiment polarities for every fine-grained element: Positive, Neutral, Negative and Not mentioned, labelled as 1, 0, -1 and -2 respectively. The meanings of these four labels are listed below.
|情感倾向值(Sentimental labels)|含义(Meaning)|
|---|---|
|1|正面情感(Positive)
|0|中性情感(Neutral)
|-1|负面情感(Negative)
|-2|情感倾向未提及(Not mentioned)
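As a minimal sketch (the function name is illustrative, not part of the dataset), the label convention above can be encoded as a lookup:

```python
# Sentiment label values and their meanings, from the table above.
SENTIMENT_LABELS = {
    1: "positive",
    0: "neutral",
    -1: "negative",
    -2: "not mentioned",
}

def describe(label: int) -> str:
    """Return the human-readable meaning of a raw sentiment label."""
    if label not in SENTIMENT_LABELS:
        raise ValueError(f"unknown sentiment label: {label}")
    return SENTIMENT_LABELS[label]

print(describe(1))   # positive
print(describe(-2))  # not mentioned
```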
数据标注示例如下:
An example of one labelled review:
>味道不错的面馆,性价比也相当之高,分量很足~女生吃小份,胃口小的,可能吃不完呢,。环境在面馆来说算是好的,至少看上去堂子很亮,也比较干净,一般苍蝇馆子还是比不上这个卫生状况的。中午饭点的时候,人很多,人行道上也是要坐满的,隔壁的冒菜馆子,据说是一家,有时候也会开放出来坐吃面的人。
|层次一(The first layer)|层次二(The second layer)|标注 (Label)|
|---|---|---|
|位置(location)|交通是否便利(traffic convenience)|-2
|-|距离商圈远近(distance from business district)|-2
|-|是否容易寻找(easy to find)|-2
|服务(service)|排队等候时间(wait time)|-2
|-|服务人员态度(waiter’s attitude)|-2
|-|是否容易停车(parking convenience)|-2
|-|点菜/上菜速度(serving speed)|-2
|价格(price)|价格水平(price level)|-2
|-|性价比(cost-effective)|1
|-|折扣力度(discount)|-2
|环境(environment)|装修情况(decoration)|1
|-|嘈杂情况(noise)|-2
|-|就餐空间(space)|-2
|-|卫生情况(cleanliness)|1
|菜品(dish)|分量(portion)|1
|-|口感(taste)|1
|-|外观(look)|-2
|-|推荐程度(recommendation)|-2
|其他(others)|本次消费感受(overall experience)|1
|-|再次消费的意愿(willing to consume again)|-2 |
edbeeching/prj_gia_dataset_atari_2B_atari_yarsrevenge_1111 | ---
library_name: gia
tags:
- deep-reinforcement-learning
- reinforcement-learning
- gia
- multi-task
- multi-modal
- imitation-learning
- offline-reinforcement-learning
---
An imitation learning dataset for the atari_yarsrevenge environment, sampled from the policy atari_2B_atari_yarsrevenge_1111.
This dataset was created as part of the Generally Intelligent Agents (GIA) project: https://github.com/huggingface/gia
|
arbitropy/ner-test | ---
dataset_info:
features:
- name: source
dtype: string
- name: target
dtype: string
- name: pos_tags
sequence: int64
- name: pos
sequence: string
- name: ner_tags
sequence: int64
- name: ner
sequence: string
- name: tokens
sequence: string
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 512824534.8277444
num_examples: 282300
- name: test
num_bytes: 1816594.1722555596
num_examples: 1000
download_size: 102628641
dataset_size: 514641129.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
irds/mmarco_zh_dev_small | ---
pretty_name: '`mmarco/zh/dev/small`'
viewer: false
source_datasets: ['irds/mmarco_zh']
task_categories:
- text-retrieval
---
# Dataset Card for `mmarco/zh/dev/small`
The `mmarco/zh/dev/small` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/mmarco#mmarco/zh/dev/small).
# Data
This dataset provides:
- `queries` (i.e., topics); count=6,980
- `qrels` (relevance assessments); count=7,437
- For `docs`, use [`irds/mmarco_zh`](https://huggingface.co/datasets/irds/mmarco_zh)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/mmarco_zh_dev_small', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/mmarco_zh_dev_small', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@article{Bonifacio2021MMarco,
title={{mMARCO}: A Multilingual Version of {MS MARCO} Passage Ranking Dataset},
author={Luiz Henrique Bonifacio and Israel Campiotti and Roberto Lotufo and Rodrigo Nogueira},
year={2021},
journal={arXiv:2108.13897}
}
```
|
skt/KVQA | ---
language:
- ko
license: other
license_name: korean-vqa-license
license_link: https://sktbrain.github.io/KVQA/license.html
pretty_name: KVQA
size_categories:
- 100K<n<1M
task_categories:
- visual-question-answering
dataset_info:
features:
- name: id
dtype: string
- name: source
dtype: string
- name: image
dtype: image
- name: question
dtype: string
- name: answers
sequence:
- name: answer
dtype: string
- name: answer_confidence
dtype: string
- name: answerable
dtype: int32
- name: answer_type
dtype: string
config_name: kvqa
splits:
- name: all
num_examples: 100445
---
We also provide KVQA blog pages in both [Korean](https://sktbrain.github.io/KVQA/) and [English](https://sktbrain.github.io/KVQA/index-en.html).
SK Telecom runs a variety of initiatives in pursuit of social value. We believe that sustainable management starts with companies taking the lead in identifying the social issues embedded in society and taking social responsibility for solving them.
Since April 2019, to localize this technology, we have collaborated with the social enterprise [Testworks](http://www.testworks.co.kr) to collect data from visually impaired volunteers in Korea, and translated into Korean the localizable portion of the English [VizWiz dataset](https://vizwiz.org/tasks-and-datasets/vqa/), producing a dataset for training visual question answering models in Korean.
# Paper
## AI for Social Good workshop at NeurIPS (Kim & Lim et al., 2019)
[PDF](https://aiforsocialgood.github.io/neurips2019/accepted/track1/pdfs/44_aisg_neurips2019.pdf)
# Visual Question Answering
Visual question answering (VQA) is the task of answering a natural-language question about a given image by understanding the image.
# KVQA Dataset
The KVQA dataset is a Korean visual question answering dataset, built as part of T-Brain's projects in pursuit of social value. It consists of photos taken by visually impaired Koreans, a question about each photo, and answers from ten different people per question.
It currently contains 30,000 images and questions with 300,000 answers, and is planned to grow to 100,000 images and questions with 1,000,000 answers by the end of the year.
The dataset may be used for education and research purposes; see the attached license for details. We hope the KVQA dataset advances Korean visual question answering technology while creating social value.
## Statistics
### v1.0 (January 2020)
| | Total (%) | Yes/No (%) | Number (%) | Other (%) | Unanswerable (%) |
|:----------|:-------------|:-------------|:-------------|:---------------|:--------------|
| # Images | 100,445 (100) | 6,124 (6.10) | 9,332 (9.29) | 69,069 (68.76) | 15,920 (15.85) |
| # Questions | 100,445 (100) | 6,124 (6.10) | 9,332 (9.29) | 69,069 (68.76) | 15,920 (15.85) |
| # Answers | 1,004,450 (100)| 61,240 (6.10)| 93,320 (9.29)| 690,690 (68.76)| 159,200 (15.85)|
## Evaluation
Accuracy is computed against the answers collected from ten different people per question. A prediction scores 100% if it matches at least 3 of the 10 answers, and a proportional partial score if it matches fewer than 3. For the final report, accuracy should be computed 10 times, each time over a different subset of 9 of the 10 answers, and the average reported. This metric is identical to the [VQA Evaluation](https://visualqa.org/evaluation.html) protocol.
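The metric described above can be sketched as follows (an illustrative implementation; no answer-string normalization is applied here):

```python
from itertools import combinations

def vqa_accuracy(prediction: str, answers: list[str]) -> float:
    """VQA-style accuracy: min(#exact matches / 3, 1), averaged over the
    ten leave-one-out subsets of nine answers."""
    scores = []
    for subset in combinations(answers, len(answers) - 1):
        matches = sum(a == prediction for a in subset)
        scores.append(min(matches / 3.0, 1.0))
    return sum(scores) / len(scores)

# A prediction matching all ten collected answers scores 1.0.
print(vqa_accuracy("피아노", ["피아노"] * 10))  # 1.0
```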
## Visual Question Answering Data
### Data Fields
| Name | Type | Description |
|:---------------------------------|:---------|:---------------------------------------------------------|
| VQA | `[dict]` | `list` of `dict`s, one per visual question answering record |
| +- image | `str` | image file name |
| +- source | `str` | data source `("kvqa", "vizwiz")` |
| +- answers | `[dict]` | `list` of 10 `dict`s, one per collected answer |
| +--- answer | `str` | answer to the visual question |
| +--- answer_confidence | `str` | annotator confidence in the answer `("yes", "maybe", "no")` |
| +- question | `str` | question about the image |
| +- answerable | `int` | whether the question is answerable `(0, 1)` |
| +- answer_type | `str` | answer type `("number", "yes/no", "unanswerable", "other")` |
### Data Example
```json
[{
"image": "KVQA_190712_00143.jpg",
"source": "kvqa",
"answers": [{
"answer": "피아노",
"answer_confidence": "yes"
}, {
"answer": "피아노",
"answer_confidence": "yes"
}, {
"answer": "피아노 치고있다",
"answer_confidence": "maybe"
}, {
"answer": "unanswerable",
"answer_confidence": "maybe"
}, {
"answer": "게임",
"answer_confidence": "maybe"
}, {
"answer": "피아노 앞에서 무언가를 보고 있음",
"answer_confidence": "maybe"
}, {
"answer": "피아노치고있어",
"answer_confidence": "maybe"
}, {
"answer": "피아노치고있어요",
"answer_confidence": "maybe"
}, {
"answer": "피아노 연주",
"answer_confidence": "maybe"
}, {
"answer": "피아노 치기",
"answer_confidence": "yes"
}],
"question": "방에 있는 사람은 지금 뭘하고 있지?",
"answerable": 1,
"answer_type": "other"
},
{
"image": "VizWiz_train_000000008148.jpg",
"source": "vizwiz",
"answers": [{
"answer": "리모컨",
"answer_confidence": "yes"
}, {
"answer": "리모컨",
"answer_confidence": "yes"
}, {
"answer": "리모컨",
"answer_confidence": "yes"
}, {
"answer": "티비 리모컨",
"answer_confidence": "yes"
}, {
"answer": "리모컨",
"answer_confidence": "yes"
}, {
"answer": "리모컨",
"answer_confidence": "yes"
}, {
"answer": "리모컨",
"answer_confidence": "yes"
}, {
"answer": "리모컨",
"answer_confidence": "maybe"
}, {
"answer": "리모컨",
"answer_confidence": "yes"
}, {
"answer": "리모컨",
"answer_confidence": "yes"
}],
"question": "이것은 무엇인가요?",
"answerable": 1,
"answer_type": "other"
}
]
```
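Given records in the format above, a simple majority vote over the ten answers recovers a consensus label. A minimal sketch (the record is abridged from the second example above):

```python
from collections import Counter

# Abridged from the second record above; the answer list is trimmed for brevity.
record = {
    "question": "이것은 무엇인가요?",
    "answers": [
        {"answer": "리모컨", "answer_confidence": "yes"},
        {"answer": "티비 리모컨", "answer_confidence": "yes"},
        {"answer": "리모컨", "answer_confidence": "maybe"},
    ],
}

# Count the raw answer strings and take the most common one as the consensus.
counts = Counter(a["answer"] for a in record["answers"])
consensus, votes = counts.most_common(1)[0]
print(consensus, votes)  # 리모컨 2
```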
# License
* [Korean VQA License](https://sktbrain.github.io/KVQA/license.html) for the KVQA Dataset
* Creative Commons License Deed ([CC BY 4.0](https://creativecommons.org/licenses/by/4.0/deed.ko)) for the VizWiz subset
* GNU GPL v3.0 for the Code |
normanhus/museum_collections | ---
license: apache-2.0
---
|
Gilbran/Glossario | ---
pretty_name: GlossarioInstivo
size_categories:
- 10M<n<100M
---
|
dialog_re | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- other
- text-generation
- fill-mask
task_ids:
- dialogue-modeling
paperswithcode_id: dialogre
pretty_name: DialogRE
tags:
- relation-extraction
dataset_info:
features:
- name: dialog
sequence: string
- name: relation_data
sequence:
- name: x
dtype: string
- name: y
dtype: string
- name: x_type
dtype: string
- name: y_type
dtype: string
- name: r
sequence: string
- name: rid
sequence: int32
- name: t
sequence: string
config_name: dialog_re
splits:
- name: train
num_bytes: 1520940
num_examples: 1073
- name: test
num_bytes: 472306
num_examples: 357
- name: validation
num_bytes: 490580
num_examples: 358
download_size: 3816234
dataset_size: 2483826
---
# Dataset Card for [DialogRE]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [DialogRE Homepage](https://dataset.org/dialogre/)
- **Repository:** [DialogRE Repository](https://github.com/nlpdata/dialogre)
- **Paper:** [Arxiv](https://arxiv.org/abs/2004.08056v1)
- **Point of Contact:** [dialogre@dataset.org](mailto:dialogre@dataset.org)
### Dataset Summary
The DialogRE dataset is the first human-annotated dialogue-based relation extraction (RE) dataset, supporting the prediction of relation(s) between two arguments that appear in a dialogue. DialogRE can also act as a platform for studying cross-sentence RE, as most facts span multiple sentences. Specifically, the dataset annotates all occurrences of 36 possible relation types between pairs of arguments in the 1,788 dialogues originating from the complete transcripts of Friends (in English).
### Supported Tasks and Leaderboards
* `other-other-relation-extraction`: The dataset can be used to train a model for Relation Extraction, which consists of the prediction of relation between two arguments that appear in a dialogue. Success on this task is typically measured by achieving a *high* [F1 Score](https://huggingface.co/metrics/f1).
### Languages
The dialogues in the dataset are in English, originating from the transcripts of Friends. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
A typical data point consists of a dialogue between speakers as a list of sentences. This is followed by the annotations of the relations between the entities in the dialog.
An example from the DialogRE train set looks as follows:
```
{'dialog': ["Speaker 1: It's been an hour and not one of my classmates has shown up! I tell you, when I actually die some people are gonna get seriously haunted!",
'Speaker 2: There you go! Someone came!',
"Speaker 1: Ok, ok! I'm gonna go hide! Oh, this is so exciting, my first mourner!",
'Speaker 3: Hi, glad you could come.',
'Speaker 2: Please, come in.',
"Speaker 4: Hi, you're Chandler Bing, right? I'm Tom Gordon, I was in your class.",
'Speaker 2: Oh yes, yes... let me... take your coat.',
"Speaker 4: Thanks... uh... I'm so sorry about Ross, it's...",
'Speaker 2: At least he died doing what he loved... watching blimps.',
'Speaker 1: Who is he?',
'Speaker 2: Some guy, Tom Gordon.',
"Speaker 1: I don't remember him, but then again I touched so many lives.",
'Speaker 3: So, did you know Ross well?',
"Speaker 4: Oh, actually I barely knew him. Yeah, I came because I heard Chandler's news. D'you know if he's seeing anyone?",
'Speaker 3: Yes, he is. Me.',
'Speaker 4: What? You... You... Oh! Can I ask you a personal question? Ho-how do you shave your beard so close?',
"Speaker 2: Ok Tommy, that's enough mourning for you! Here we go, bye bye!!",
'Speaker 4: Hey, listen. Call me.',
'Speaker 2: Ok!'],
'relation_data': {'r': [['per:alternate_names'],
['per:alumni'],
['per:alternate_names'],
['per:alumni', 'per:positive_impression'],
['per:alternate_names'],
['unanswerable']],
'rid': [[30], [4], [30], [4, 1], [30], [37]],
't': [[''], [''], [''], ['', 'call me'], [''], ['']],
'x': ['Speaker 2',
'Speaker 2',
'Speaker 4',
'Speaker 4',
'Speaker 4',
'Speaker 1'],
'x_type': ['PER', 'PER', 'PER', 'PER', 'PER', 'PER'],
'y': ['Chandler Bing',
'Speaker 4',
'Tom Gordon',
'Speaker 2',
'Tommy',
'Tommy'],
'y_type': ['PER', 'PER', 'PER', 'PER', 'PER', 'PER']}}
```
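The annotation lists in `relation_data` are parallel: entry i of every list describes the same argument pair, so they can be zipped into one record per pair. A minimal sketch over a trimmed version of the example above:

```python
# A trimmed relation_data block from the example above.
relation_data = {
    "x": ["Speaker 2", "Speaker 4"],
    "y": ["Chandler Bing", "Tom Gordon"],
    "x_type": ["PER", "PER"],
    "y_type": ["PER", "PER"],
    "r": [["per:alternate_names"], ["per:alternate_names"]],
    "rid": [[30], [30]],
    "t": [[""], [""]],
}

# Zip the parallel lists into one record per argument pair.
records = [
    {"x": x, "x_type": xt, "y": y, "y_type": yt, "relations": r}
    for x, xt, y, yt, r in zip(
        relation_data["x"], relation_data["x_type"],
        relation_data["y"], relation_data["y_type"],
        relation_data["r"],
    )
]
print(records[0]["x"], records[0]["relations"])  # Speaker 2 ['per:alternate_names']
```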
### Data Fields
* `dialog`
* List of dialog spoken between the speakers
* List of annotations per dialog per argument
* `x` : First entity
* `y` : Second entity
* `x_type` : Type of the first entity
* `y_type`: Type of the second entity
* `r` : List of relations
* `rid`: List of relation IDs
* `t`: List of relation Trigger words
### Data Splits
The data is split into a training, validation and test set as per the original dataset split.
| | train | validation | test |
| --------------------- |-------:|------------:|------:|
| Input dialog examples | 1073 | 358 | 357 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The DialogRE dataset is intended for non-commercial research purposes only.
### Citation Information
```
@inproceedings{yu2020dialogue,
title={Dialogue-Based Relation Extraction},
author={Yu, Dian and Sun, Kai and Cardie, Claire and Yu, Dong},
booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
year={2020},
url={https://arxiv.org/abs/2004.08056v1}
}
```
### Contributions
Thanks to [@vineeths96](https://github.com/vineeths96) for adding this dataset. |
nlpso/m0_qualitative_analysis_ref_cmbert_io | ---
language:
- fr
multilinguality:
- monolingual
task_categories:
- token-classification
---
# m0_qualitative_analysis_ref_cmbert_io
## Introduction
This dataset was used to perform **qualitative analysis** of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) on a **flat NER task** using the flat NER approach [M0].
It contains entries from 19th-century Paris trade directories.
## Dataset parameters
* Approach : M0
* Dataset type : ground-truth
* Tokenizer : [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner)
* Tagging format : IO
* Counts :
* Train : 6084
* Dev : 676
* Test : 1685
* Associated fine-tuned model : [nlpso/m0_flat_ner_ref_cmbert_io](https://huggingface.co/nlpso/m0_flat_ner_ref_cmbert_io)
## Entity types
Abbreviation|Description
-|-
O |Outside of a named entity
PER |Person or company name
ACT |Person or company professional activity
TITRE |Distinction
LOC |Street name
CARDINAL |Street number
FT |Geographical feature
## How to use this dataset
```python
from datasets import load_dataset
train_dev_test = load_dataset("nlpso/m0_qualitative_analysis_ref_cmbert_io")
```
|
mitanshu17/Nuscenes | ---
license: apache-2.0
---
|
vinhnq29/ViMathQA | ---
dataset_info:
- config_name: test_v1
features:
- name: instruction
dtype: string
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype: string
- name: right_choice
dtype: string
splits:
- name: train
num_bytes: 511629
num_examples: 1104
- name: test
num_bytes: 511629
num_examples: 1104
- name: base_models
num_bytes: 442114.1902173913
num_examples: 954
- name: base_models_test
num_bytes: 442114.1902173913
num_examples: 954
download_size: 818786
dataset_size: 1907486.3804347827
- config_name: train_v1
features:
- name: segments
list:
- name: label
dtype: bool
- name: text
dtype: string
splits:
- name: input_output_vinallama
num_bytes: 3806969
num_examples: 7107
- name: input_output_zephyr
num_bytes: 3509173
num_examples: 7107
- name: input_output_vistral
num_bytes: 3464945
num_examples: 7107
- name: input_output_wizardmath
num_bytes: 4181916
num_examples: 7107
- name: input_output_qwen
num_bytes: 3808346
num_examples: 7107
- name: input_output_metamath
num_bytes: 4184665
num_examples: 7107
download_size: 9528510
dataset_size: 22956014
configs:
- config_name: test_v1
data_files:
- split: train
path: test_v1/train-*
- split: test
path: test_v1/test-*
- split: base_models
path: test_v1/base_models-*
- split: base_models_test
path: test_v1/base_models_test-*
- config_name: train_v1
data_files:
- split: input_output_vinallama
path: train_v1/input_output_vinallama-*
- split: input_output_zephyr
path: train_v1/input_output_zephyr-*
- split: input_output_vistral
path: train_v1/input_output_vistral-*
- split: input_output_wizardmath
path: train_v1/input_output_wizardmath-*
- split: input_output_qwen
path: train_v1/input_output_qwen-*
- split: input_output_metamath
path: train_v1/input_output_metamath-*
---
|
open-llm-leaderboard/details_Sharathhebbar24__SSH_355M | ---
pretty_name: Evaluation run of Sharathhebbar24/SSH_355M
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Sharathhebbar24/SSH_355M](https://huggingface.co/Sharathhebbar24/SSH_355M) on\
\ the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Sharathhebbar24__SSH_355M\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-10T16:37:52.949770](https://huggingface.co/datasets/open-llm-leaderboard/details_Sharathhebbar24__SSH_355M/blob/main/results_2024-02-10T16-37-52.949770.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.2757917484580653,\n\
\ \"acc_stderr\": 0.031327907514240604,\n \"acc_norm\": 0.27776537467722157,\n\
\ \"acc_norm_stderr\": 0.032165569179046345,\n \"mc1\": 0.26438188494492043,\n\
\ \"mc1_stderr\": 0.01543821111952251,\n \"mc2\": 0.4415086011559294,\n\
\ \"mc2_stderr\": 0.01461283872125848\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.2354948805460751,\n \"acc_stderr\": 0.012399451855004755,\n\
\ \"acc_norm\": 0.2696245733788396,\n \"acc_norm_stderr\": 0.01296804068686915\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.3207528380800637,\n\
\ \"acc_stderr\": 0.004658120152230824,\n \"acc_norm\": 0.3897629954192392,\n\
\ \"acc_norm_stderr\": 0.004866997110388195\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.23,\n \"acc_stderr\": 0.04229525846816503,\n \
\ \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.04229525846816503\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.23703703703703705,\n\
\ \"acc_stderr\": 0.03673731683969506,\n \"acc_norm\": 0.23703703703703705,\n\
\ \"acc_norm_stderr\": 0.03673731683969506\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.32894736842105265,\n \"acc_stderr\": 0.03823428969926604,\n\
\ \"acc_norm\": 0.32894736842105265,\n \"acc_norm_stderr\": 0.03823428969926604\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.2,\n\
\ \"acc_stderr\": 0.04020151261036844,\n \"acc_norm\": 0.2,\n \
\ \"acc_norm_stderr\": 0.04020151261036844\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.2943396226415094,\n \"acc_stderr\": 0.028049186315695245,\n\
\ \"acc_norm\": 0.2943396226415094,\n \"acc_norm_stderr\": 0.028049186315695245\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2916666666666667,\n\
\ \"acc_stderr\": 0.03800968060554858,\n \"acc_norm\": 0.2916666666666667,\n\
\ \"acc_norm_stderr\": 0.03800968060554858\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.38,\n \"acc_stderr\": 0.048783173121456316,\n \
\ \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.048783173121456316\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \"acc_norm\"\
: 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.23699421965317918,\n\
\ \"acc_stderr\": 0.03242414757483098,\n \"acc_norm\": 0.23699421965317918,\n\
\ \"acc_norm_stderr\": 0.03242414757483098\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.37254901960784315,\n \"acc_stderr\": 0.048108401480826346,\n\
\ \"acc_norm\": 0.37254901960784315,\n \"acc_norm_stderr\": 0.048108401480826346\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.23,\n \"acc_stderr\": 0.04229525846816505,\n \"acc_norm\": 0.23,\n\
\ \"acc_norm_stderr\": 0.04229525846816505\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.2723404255319149,\n \"acc_stderr\": 0.029101290698386715,\n\
\ \"acc_norm\": 0.2723404255319149,\n \"acc_norm_stderr\": 0.029101290698386715\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2543859649122807,\n\
\ \"acc_stderr\": 0.040969851398436716,\n \"acc_norm\": 0.2543859649122807,\n\
\ \"acc_norm_stderr\": 0.040969851398436716\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.21379310344827587,\n \"acc_stderr\": 0.034165204477475494,\n\
\ \"acc_norm\": 0.21379310344827587,\n \"acc_norm_stderr\": 0.034165204477475494\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.25132275132275134,\n \"acc_stderr\": 0.022340482339643898,\n \"\
acc_norm\": 0.25132275132275134,\n \"acc_norm_stderr\": 0.022340482339643898\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.31746031746031744,\n\
\ \"acc_stderr\": 0.04163453031302859,\n \"acc_norm\": 0.31746031746031744,\n\
\ \"acc_norm_stderr\": 0.04163453031302859\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.3161290322580645,\n\
\ \"acc_stderr\": 0.02645087448904277,\n \"acc_norm\": 0.3161290322580645,\n\
\ \"acc_norm_stderr\": 0.02645087448904277\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.32019704433497537,\n \"acc_stderr\": 0.032826493853041504,\n\
\ \"acc_norm\": 0.32019704433497537,\n \"acc_norm_stderr\": 0.032826493853041504\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.18,\n \"acc_stderr\": 0.03861229196653694,\n \"acc_norm\"\
: 0.18,\n \"acc_norm_stderr\": 0.03861229196653694\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.2545454545454545,\n \"acc_stderr\": 0.03401506715249039,\n\
\ \"acc_norm\": 0.2545454545454545,\n \"acc_norm_stderr\": 0.03401506715249039\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.35353535353535354,\n \"acc_stderr\": 0.03406086723547153,\n \"\
acc_norm\": 0.35353535353535354,\n \"acc_norm_stderr\": 0.03406086723547153\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.36787564766839376,\n \"acc_stderr\": 0.03480175668466036,\n\
\ \"acc_norm\": 0.36787564766839376,\n \"acc_norm_stderr\": 0.03480175668466036\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.36666666666666664,\n \"acc_stderr\": 0.024433016466052455,\n\
\ \"acc_norm\": 0.36666666666666664,\n \"acc_norm_stderr\": 0.024433016466052455\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.25925925925925924,\n \"acc_stderr\": 0.026719240783712163,\n \
\ \"acc_norm\": 0.25925925925925924,\n \"acc_norm_stderr\": 0.026719240783712163\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.3487394957983193,\n \"acc_stderr\": 0.03095663632856655,\n \
\ \"acc_norm\": 0.3487394957983193,\n \"acc_norm_stderr\": 0.03095663632856655\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.31788079470198677,\n \"acc_stderr\": 0.038020397601079024,\n \"\
acc_norm\": 0.31788079470198677,\n \"acc_norm_stderr\": 0.038020397601079024\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.3486238532110092,\n \"acc_stderr\": 0.020431254090714328,\n \"\
acc_norm\": 0.3486238532110092,\n \"acc_norm_stderr\": 0.020431254090714328\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.4722222222222222,\n \"acc_stderr\": 0.0340470532865388,\n \"acc_norm\"\
: 0.4722222222222222,\n \"acc_norm_stderr\": 0.0340470532865388\n },\n\
\ \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.25,\n\
\ \"acc_stderr\": 0.03039153369274154,\n \"acc_norm\": 0.25,\n \
\ \"acc_norm_stderr\": 0.03039153369274154\n },\n \"harness|hendrycksTest-high_school_world_history|5\"\
: {\n \"acc\": 0.1940928270042194,\n \"acc_stderr\": 0.025744902532290916,\n\
\ \"acc_norm\": 0.1940928270042194,\n \"acc_norm_stderr\": 0.025744902532290916\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.11659192825112108,\n\
\ \"acc_stderr\": 0.02153963981624447,\n \"acc_norm\": 0.11659192825112108,\n\
\ \"acc_norm_stderr\": 0.02153963981624447\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.3053435114503817,\n \"acc_stderr\": 0.04039314978724561,\n\
\ \"acc_norm\": 0.3053435114503817,\n \"acc_norm_stderr\": 0.04039314978724561\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.18181818181818182,\n \"acc_stderr\": 0.035208939510976554,\n \"\
acc_norm\": 0.18181818181818182,\n \"acc_norm_stderr\": 0.035208939510976554\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.21296296296296297,\n\
\ \"acc_stderr\": 0.0395783547198098,\n \"acc_norm\": 0.21296296296296297,\n\
\ \"acc_norm_stderr\": 0.0395783547198098\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.1901840490797546,\n \"acc_stderr\": 0.030833491146281214,\n\
\ \"acc_norm\": 0.1901840490797546,\n \"acc_norm_stderr\": 0.030833491146281214\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.16071428571428573,\n\
\ \"acc_stderr\": 0.03485946096475741,\n \"acc_norm\": 0.16071428571428573,\n\
\ \"acc_norm_stderr\": 0.03485946096475741\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.3592233009708738,\n \"acc_stderr\": 0.04750458399041692,\n\
\ \"acc_norm\": 0.3592233009708738,\n \"acc_norm_stderr\": 0.04750458399041692\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.19658119658119658,\n\
\ \"acc_stderr\": 0.02603538609895129,\n \"acc_norm\": 0.19658119658119658,\n\
\ \"acc_norm_stderr\": 0.02603538609895129\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.22,\n \"acc_stderr\": 0.04163331998932269,\n \
\ \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.04163331998932269\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.2222222222222222,\n\
\ \"acc_stderr\": 0.014866821664709593,\n \"acc_norm\": 0.2222222222222222,\n\
\ \"acc_norm_stderr\": 0.014866821664709593\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.2514450867052023,\n \"acc_stderr\": 0.02335736578587404,\n\
\ \"acc_norm\": 0.2514450867052023,\n \"acc_norm_stderr\": 0.02335736578587404\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2435754189944134,\n\
\ \"acc_stderr\": 0.014355911964767864,\n \"acc_norm\": 0.2435754189944134,\n\
\ \"acc_norm_stderr\": 0.014355911964767864\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.2908496732026144,\n \"acc_stderr\": 0.026004800363952113,\n\
\ \"acc_norm\": 0.2908496732026144,\n \"acc_norm_stderr\": 0.026004800363952113\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.24437299035369775,\n\
\ \"acc_stderr\": 0.024406162094668882,\n \"acc_norm\": 0.24437299035369775,\n\
\ \"acc_norm_stderr\": 0.024406162094668882\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.22530864197530864,\n \"acc_stderr\": 0.023246202647819746,\n\
\ \"acc_norm\": 0.22530864197530864,\n \"acc_norm_stderr\": 0.023246202647819746\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.2624113475177305,\n \"acc_stderr\": 0.026244920349843014,\n \
\ \"acc_norm\": 0.2624113475177305,\n \"acc_norm_stderr\": 0.026244920349843014\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.26401564537157757,\n\
\ \"acc_stderr\": 0.011258435537723821,\n \"acc_norm\": 0.26401564537157757,\n\
\ \"acc_norm_stderr\": 0.011258435537723821\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.4485294117647059,\n \"acc_stderr\": 0.030211479609121593,\n\
\ \"acc_norm\": 0.4485294117647059,\n \"acc_norm_stderr\": 0.030211479609121593\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.21895424836601307,\n \"acc_stderr\": 0.016729937565537544,\n \
\ \"acc_norm\": 0.21895424836601307,\n \"acc_norm_stderr\": 0.016729937565537544\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.2909090909090909,\n\
\ \"acc_stderr\": 0.04350271442923243,\n \"acc_norm\": 0.2909090909090909,\n\
\ \"acc_norm_stderr\": 0.04350271442923243\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.39591836734693875,\n \"acc_stderr\": 0.03130802899065686,\n\
\ \"acc_norm\": 0.39591836734693875,\n \"acc_norm_stderr\": 0.03130802899065686\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.2736318407960199,\n\
\ \"acc_stderr\": 0.03152439186555401,\n \"acc_norm\": 0.2736318407960199,\n\
\ \"acc_norm_stderr\": 0.03152439186555401\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.26,\n \"acc_stderr\": 0.04408440022768078,\n \
\ \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.04408440022768078\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.2469879518072289,\n\
\ \"acc_stderr\": 0.03357351982064537,\n \"acc_norm\": 0.2469879518072289,\n\
\ \"acc_norm_stderr\": 0.03357351982064537\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.24561403508771928,\n \"acc_stderr\": 0.03301405946987249,\n\
\ \"acc_norm\": 0.24561403508771928,\n \"acc_norm_stderr\": 0.03301405946987249\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.26438188494492043,\n\
\ \"mc1_stderr\": 0.01543821111952251,\n \"mc2\": 0.4415086011559294,\n\
\ \"mc2_stderr\": 0.01461283872125848\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.5382794001578532,\n \"acc_stderr\": 0.014011242594964123\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\"\
: 0.0\n }\n}\n```"
repo_url: https://huggingface.co/Sharathhebbar24/SSH_355M
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|arc:challenge|25_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|gsm8k|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hellaswag|10_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-10T16-37-52.949770.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-10T16-37-52.949770.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- '**/details_harness|winogrande|5_2024-02-10T16-37-52.949770.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-10T16-37-52.949770.parquet'
- config_name: results
data_files:
- split: 2024_02_10T16_37_52.949770
path:
- results_2024-02-10T16-37-52.949770.parquet
- split: latest
path:
- results_2024-02-10T16-37-52.949770.parquet
---
# Dataset Card for Evaluation run of Sharathhebbar24/SSH_355M
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Sharathhebbar24/SSH_355M](https://huggingface.co/Sharathhebbar24/SSH_355M) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Sharathhebbar24__SSH_355M",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-02-10T16:37:52.949770](https://huggingface.co/datasets/open-llm-leaderboard/details_Sharathhebbar24__SSH_355M/blob/main/results_2024-02-10T16-37-52.949770.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each task in the "results" file and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.2757917484580653,
"acc_stderr": 0.031327907514240604,
"acc_norm": 0.27776537467722157,
"acc_norm_stderr": 0.032165569179046345,
"mc1": 0.26438188494492043,
"mc1_stderr": 0.01543821111952251,
"mc2": 0.4415086011559294,
"mc2_stderr": 0.01461283872125848
},
"harness|arc:challenge|25": {
"acc": 0.2354948805460751,
"acc_stderr": 0.012399451855004755,
"acc_norm": 0.2696245733788396,
"acc_norm_stderr": 0.01296804068686915
},
"harness|hellaswag|10": {
"acc": 0.3207528380800637,
"acc_stderr": 0.004658120152230824,
"acc_norm": 0.3897629954192392,
"acc_norm_stderr": 0.004866997110388195
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.23,
"acc_stderr": 0.04229525846816503,
"acc_norm": 0.23,
"acc_norm_stderr": 0.04229525846816503
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.23703703703703705,
"acc_stderr": 0.03673731683969506,
"acc_norm": 0.23703703703703705,
"acc_norm_stderr": 0.03673731683969506
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.32894736842105265,
"acc_stderr": 0.03823428969926604,
"acc_norm": 0.32894736842105265,
"acc_norm_stderr": 0.03823428969926604
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.2,
"acc_stderr": 0.04020151261036844,
"acc_norm": 0.2,
"acc_norm_stderr": 0.04020151261036844
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.2943396226415094,
"acc_stderr": 0.028049186315695245,
"acc_norm": 0.2943396226415094,
"acc_norm_stderr": 0.028049186315695245
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2916666666666667,
"acc_stderr": 0.03800968060554858,
"acc_norm": 0.2916666666666667,
"acc_norm_stderr": 0.03800968060554858
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.38,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.38,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252604,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252604
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.23699421965317918,
"acc_stderr": 0.03242414757483098,
"acc_norm": 0.23699421965317918,
"acc_norm_stderr": 0.03242414757483098
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.37254901960784315,
"acc_stderr": 0.048108401480826346,
"acc_norm": 0.37254901960784315,
"acc_norm_stderr": 0.048108401480826346
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.23,
"acc_stderr": 0.04229525846816505,
"acc_norm": 0.23,
"acc_norm_stderr": 0.04229525846816505
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.2723404255319149,
"acc_stderr": 0.029101290698386715,
"acc_norm": 0.2723404255319149,
"acc_norm_stderr": 0.029101290698386715
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2543859649122807,
"acc_stderr": 0.040969851398436716,
"acc_norm": 0.2543859649122807,
"acc_norm_stderr": 0.040969851398436716
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.21379310344827587,
"acc_stderr": 0.034165204477475494,
"acc_norm": 0.21379310344827587,
"acc_norm_stderr": 0.034165204477475494
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.25132275132275134,
"acc_stderr": 0.022340482339643898,
"acc_norm": 0.25132275132275134,
"acc_norm_stderr": 0.022340482339643898
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.31746031746031744,
"acc_stderr": 0.04163453031302859,
"acc_norm": 0.31746031746031744,
"acc_norm_stderr": 0.04163453031302859
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.3161290322580645,
"acc_stderr": 0.02645087448904277,
"acc_norm": 0.3161290322580645,
"acc_norm_stderr": 0.02645087448904277
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.32019704433497537,
"acc_stderr": 0.032826493853041504,
"acc_norm": 0.32019704433497537,
"acc_norm_stderr": 0.032826493853041504
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.18,
"acc_stderr": 0.03861229196653694,
"acc_norm": 0.18,
"acc_norm_stderr": 0.03861229196653694
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.2545454545454545,
"acc_stderr": 0.03401506715249039,
"acc_norm": 0.2545454545454545,
"acc_norm_stderr": 0.03401506715249039
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.35353535353535354,
"acc_stderr": 0.03406086723547153,
"acc_norm": 0.35353535353535354,
"acc_norm_stderr": 0.03406086723547153
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.36787564766839376,
"acc_stderr": 0.03480175668466036,
"acc_norm": 0.36787564766839376,
"acc_norm_stderr": 0.03480175668466036
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.36666666666666664,
"acc_stderr": 0.024433016466052455,
"acc_norm": 0.36666666666666664,
"acc_norm_stderr": 0.024433016466052455
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.25925925925925924,
"acc_stderr": 0.026719240783712163,
"acc_norm": 0.25925925925925924,
"acc_norm_stderr": 0.026719240783712163
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.3487394957983193,
"acc_stderr": 0.03095663632856655,
"acc_norm": 0.3487394957983193,
"acc_norm_stderr": 0.03095663632856655
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.31788079470198677,
"acc_stderr": 0.038020397601079024,
"acc_norm": 0.31788079470198677,
"acc_norm_stderr": 0.038020397601079024
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.3486238532110092,
"acc_stderr": 0.020431254090714328,
"acc_norm": 0.3486238532110092,
"acc_norm_stderr": 0.020431254090714328
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4722222222222222,
"acc_stderr": 0.0340470532865388,
"acc_norm": 0.4722222222222222,
"acc_norm_stderr": 0.0340470532865388
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.25,
"acc_stderr": 0.03039153369274154,
"acc_norm": 0.25,
"acc_norm_stderr": 0.03039153369274154
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.1940928270042194,
"acc_stderr": 0.025744902532290916,
"acc_norm": 0.1940928270042194,
"acc_norm_stderr": 0.025744902532290916
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.11659192825112108,
"acc_stderr": 0.02153963981624447,
"acc_norm": 0.11659192825112108,
"acc_norm_stderr": 0.02153963981624447
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.3053435114503817,
"acc_stderr": 0.04039314978724561,
"acc_norm": 0.3053435114503817,
"acc_norm_stderr": 0.04039314978724561
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.18181818181818182,
"acc_stderr": 0.035208939510976554,
"acc_norm": 0.18181818181818182,
"acc_norm_stderr": 0.035208939510976554
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.21296296296296297,
"acc_stderr": 0.0395783547198098,
"acc_norm": 0.21296296296296297,
"acc_norm_stderr": 0.0395783547198098
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.1901840490797546,
"acc_stderr": 0.030833491146281214,
"acc_norm": 0.1901840490797546,
"acc_norm_stderr": 0.030833491146281214
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.16071428571428573,
"acc_stderr": 0.03485946096475741,
"acc_norm": 0.16071428571428573,
"acc_norm_stderr": 0.03485946096475741
},
"harness|hendrycksTest-management|5": {
"acc": 0.3592233009708738,
"acc_stderr": 0.04750458399041692,
"acc_norm": 0.3592233009708738,
"acc_norm_stderr": 0.04750458399041692
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.19658119658119658,
"acc_stderr": 0.02603538609895129,
"acc_norm": 0.19658119658119658,
"acc_norm_stderr": 0.02603538609895129
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.22,
"acc_stderr": 0.04163331998932269,
"acc_norm": 0.22,
"acc_norm_stderr": 0.04163331998932269
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.2222222222222222,
"acc_stderr": 0.014866821664709593,
"acc_norm": 0.2222222222222222,
"acc_norm_stderr": 0.014866821664709593
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.2514450867052023,
"acc_stderr": 0.02335736578587404,
"acc_norm": 0.2514450867052023,
"acc_norm_stderr": 0.02335736578587404
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2435754189944134,
"acc_stderr": 0.014355911964767864,
"acc_norm": 0.2435754189944134,
"acc_norm_stderr": 0.014355911964767864
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.2908496732026144,
"acc_stderr": 0.026004800363952113,
"acc_norm": 0.2908496732026144,
"acc_norm_stderr": 0.026004800363952113
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.24437299035369775,
"acc_stderr": 0.024406162094668882,
"acc_norm": 0.24437299035369775,
"acc_norm_stderr": 0.024406162094668882
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.22530864197530864,
"acc_stderr": 0.023246202647819746,
"acc_norm": 0.22530864197530864,
"acc_norm_stderr": 0.023246202647819746
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.2624113475177305,
"acc_stderr": 0.026244920349843014,
"acc_norm": 0.2624113475177305,
"acc_norm_stderr": 0.026244920349843014
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.26401564537157757,
"acc_stderr": 0.011258435537723821,
"acc_norm": 0.26401564537157757,
"acc_norm_stderr": 0.011258435537723821
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.4485294117647059,
"acc_stderr": 0.030211479609121593,
"acc_norm": 0.4485294117647059,
"acc_norm_stderr": 0.030211479609121593
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.21895424836601307,
"acc_stderr": 0.016729937565537544,
"acc_norm": 0.21895424836601307,
"acc_norm_stderr": 0.016729937565537544
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.2909090909090909,
"acc_stderr": 0.04350271442923243,
"acc_norm": 0.2909090909090909,
"acc_norm_stderr": 0.04350271442923243
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.39591836734693875,
"acc_stderr": 0.03130802899065686,
"acc_norm": 0.39591836734693875,
"acc_norm_stderr": 0.03130802899065686
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.2736318407960199,
"acc_stderr": 0.03152439186555401,
"acc_norm": 0.2736318407960199,
"acc_norm_stderr": 0.03152439186555401
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.26,
"acc_stderr": 0.04408440022768078,
"acc_norm": 0.26,
"acc_norm_stderr": 0.04408440022768078
},
"harness|hendrycksTest-virology|5": {
"acc": 0.2469879518072289,
"acc_stderr": 0.03357351982064537,
"acc_norm": 0.2469879518072289,
"acc_norm_stderr": 0.03357351982064537
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.24561403508771928,
"acc_stderr": 0.03301405946987249,
"acc_norm": 0.24561403508771928,
"acc_norm_stderr": 0.03301405946987249
},
"harness|truthfulqa:mc|0": {
"mc1": 0.26438188494492043,
"mc1_stderr": 0.01543821111952251,
"mc2": 0.4415086011559294,
"mc2_stderr": 0.01461283872125848
},
"harness|winogrande|5": {
"acc": 0.5382794001578532,
"acc_stderr": 0.014011242594964123
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
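The top-level `"all"` entry is, roughly, the macro-average of the per-task metrics. As a minimal sketch of that aggregation (using a small hypothetical subset of the tasks above, not the full 63):

```python
# Hypothetical per-task results, shaped like the JSON report above.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.23},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.237},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.329},
}

# Macro-average: each task contributes equally, regardless of its size.
mean_acc = sum(task["acc"] for task in results.values()) / len(results)
print(round(mean_acc, 3))  # 0.265
```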
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
CyberHarem/shinshuu_maru_kantaicollection | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of shinshuu_maru/神州丸 (Kantai Collection)
This is the dataset of shinshuu_maru/神州丸 (Kantai Collection), containing 406 images and their tags.
The core tags of this character are `brown_hair, long_hair, braid, twin_braids, brown_eyes, breasts, large_breasts, ribbon, red_ribbon, hair_ribbon, ahoge`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 406 | 447.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shinshuu_maru_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 406 | 264.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shinshuu_maru_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 961 | 574.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shinshuu_maru_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 406 | 403.50 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shinshuu_maru_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 961 | 796.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shinshuu_maru_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/shinshuu_maru_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
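For the IMG+TXT packages (e.g. `dataset-800.zip`), each image is paired with a same-stem `.txt` file holding its comma-separated tags. Once a package is extracted, the pairs can be read with plain Python; this is a sketch under that layout assumption (the demo directory and file names below are made up):

```python
import os
import tempfile

def load_img_txt_pairs(dataset_dir):
    """Pair each image file with its same-stem .txt tag file."""
    pairs = []
    for name in sorted(os.listdir(dataset_dir)):
        stem, ext = os.path.splitext(name)
        if ext.lower() in {".png", ".jpg", ".jpeg", ".webp"}:
            txt_path = os.path.join(dataset_dir, stem + ".txt")
            if os.path.exists(txt_path):
                with open(txt_path, encoding="utf-8") as f:
                    tags = [t.strip() for t in f.read().split(",")]
                pairs.append((os.path.join(dataset_dir, name), tags))
    return pairs

# Demo on a temporary directory containing one fake image/tag pair.
with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "0001.png"), "wb").close()
    with open(os.path.join(d, "0001.txt"), "w", encoding="utf-8") as f:
        f.write("1girl, solo, black_capelet")
    pairs = load_img_txt_pairs(d)
    print(len(pairs), pairs[0][1])  # 1 ['1girl', 'solo', 'black_capelet']
```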
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 30 |  |  |  |  |  | 1girl, black_capelet, black_dress, hooded_capelet, simple_background, solo, hood_up, looking_at_viewer, upper_body, white_background, long_sleeves, blush |
| 1 | 10 |  |  |  |  |  | 1girl, black_capelet, black_dress, brown_belt, hooded_capelet, long_sleeves, pleated_dress, solo, cowboy_shot, looking_at_viewer, blush, hood_up, simple_background, white_background |
| 2 | 8 |  |  |  |  |  | 1girl, black_capelet, black_dress, black_footwear, boots, brown_belt, hood_up, hooded_capelet, long_sleeves, simple_background, solo, pleated_dress, white_background, full_body, looking_at_viewer, open_mouth, wariza |
| 3 | 10 |  |  |  |  |  | 1girl, black_dress, blush, hooded_capelet, black_capelet, cleavage, long_sleeves, solo, brown_belt, simple_background, torn_clothes, white_background, white_bra, looking_at_viewer, open_mouth, pleated_dress, bangs |
| 4 | 14 |  |  |  |  |  | 1girl, solo, white_panties, cleavage, hood_up, hooded_capelet, white_bra, black_capelet, simple_background, white_background, blush, looking_at_viewer, navel, cowboy_shot, dated, one-hour_drawing_challenge, twitter_username |
| 5 | 5 |  |  |  |  |  | 1girl, blush, fake_animal_ears, playboy_bunny, rabbit_ears, solo, black_capelet, black_leotard, cleavage, hooded_capelet, simple_background, strapless_leotard, wrist_cuffs, adapted_costume, white_background, black_footwear, detached_collar, fishnet_pantyhose, hood_up, looking_at_viewer, rabbit_tail |
| 6 | 6 |  |  |  |  |  | 1boy, 1girl, black_capelet, blush, hetero, hood_up, hooded_capelet, penis, solo_focus, nipples, bangs, paizuri, simple_background, censored, cum, grey_background, open_mouth |
| 7 | 6 |  |  |  |  |  | 1girl, looking_at_viewer, solo, alternate_costume, pleated_skirt, simple_background, white_background, white_shirt, artist_logo, blush, dated, bag, blue_skirt, cowboy_shot, one-hour_drawing_challenge, sailor_collar, serafuku, short_sleeves, white_panties |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_capelet | black_dress | hooded_capelet | simple_background | solo | hood_up | looking_at_viewer | upper_body | white_background | long_sleeves | blush | brown_belt | pleated_dress | cowboy_shot | black_footwear | boots | full_body | open_mouth | wariza | cleavage | torn_clothes | white_bra | bangs | white_panties | navel | dated | one-hour_drawing_challenge | twitter_username | fake_animal_ears | playboy_bunny | rabbit_ears | black_leotard | strapless_leotard | wrist_cuffs | adapted_costume | detached_collar | fishnet_pantyhose | rabbit_tail | 1boy | hetero | penis | solo_focus | nipples | paizuri | censored | cum | grey_background | alternate_costume | pleated_skirt | white_shirt | artist_logo | bag | blue_skirt | sailor_collar | serafuku | short_sleeves |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------------|:--------------|:-----------------|:--------------------|:-------|:----------|:--------------------|:-------------|:-------------------|:---------------|:--------|:-------------|:----------------|:--------------|:-----------------|:--------|:------------|:-------------|:---------|:-----------|:---------------|:------------|:--------|:----------------|:--------|:--------|:-----------------------------|:-------------------|:-------------------|:----------------|:--------------|:----------------|:--------------------|:--------------|:------------------|:------------------|:--------------------|:--------------|:-------|:---------|:--------|:-------------|:----------|:----------|:-----------|:------|:------------------|:--------------------|:----------------|:--------------|:--------------|:------|:-------------|:----------------|:-----------|:----------------|
| 0 | 30 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 10 |  |  |  |  |  | X | X | X | X | X | X | X | X | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | | X | X | | X | X | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 10 |  |  |  |  |  | X | X | X | X | X | X | | X | | X | X | X | X | X | | | | | X | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 14 |  |  |  |  |  | X | X | | X | X | X | X | X | | X | | X | | | X | | | | | | X | | X | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | X | | X | X | X | X | X | | X | | X | | | | X | | | | | X | | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | |
| 6 | 6 |  |  |  |  |  | X | X | | X | X | | X | | | | | X | | | | | | | X | | | | | X | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | |
| 7 | 6 |  |  |  |  |  | X | | | | X | X | | X | | X | | X | | | X | | | | | | | | | | X | | X | X | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X |
|
johndoe1100100101/nsfw_chat | ---
license: apache-2.0
---
|
eperim/base_to_eval | ---
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 239549
num_examples: 200
download_size: 148777
dataset_size: 239549
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tyzhu/squad_qa_baseline_v5_full_recite_full_passage_random_permute_rerun_8 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer
dtype: string
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 4369231.0
num_examples: 2385
- name: validation
num_bytes: 573308
num_examples: 300
download_size: 1012407
dataset_size: 4942539.0
---
# Dataset Card for "squad_qa_baseline_v5_full_recite_full_passage_random_permute_rerun_8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
v-xchen-v/truthfulqa_true | ---
license: apache-2.0
task_categories:
- question-answering
language:
- en
size_categories:
- n<1K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
AlexWortega/FicBook | ---
license: mit
language:
- ru
--- |
enoahjr/twitter_dataset_1713206184 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 347216
num_examples: 915
download_size: 163118
dataset_size: 347216
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mdass/gpt_gen_desc_image_only_logos | ---
dataset_info:
features:
- name: image
dtype: image
- name: description
dtype: string
splits:
- name: train
num_bytes: 2618263.0
num_examples: 100
download_size: 2588112
dataset_size: 2618263.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
amaye15/Products-10k | ---
dataset_info:
features:
- name: pixel_values
dtype: image
- name: label
dtype:
class_label:
names:
'0': Barcode
'1': Invoice
'2': Object
'3': Receipt
'4': Non-Object
splits:
- name: train
num_bytes: 14174964689.855999
num_examples: 137904
- name: test
num_bytes: 3543740793.2279997
num_examples: 34476
download_size: 17609512642
dataset_size: 17718705483.084
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
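The `label` feature is a `class_label` with the five names listed above. A minimal sketch of mapping between the integer ids and the class names — this mirrors what `datasets.ClassLabel.int2str`/`str2int` do, and assumes the id ordering shown in the metadata:

```python
# Class names in the id order given by this card's class_label metadata.
names = ["Barcode", "Invoice", "Object", "Receipt", "Non-Object"]

id2label = dict(enumerate(names))                     # 0 -> "Barcode", ...
label2id = {name: i for i, name in enumerate(names)}  # "Barcode" -> 0, ...

print(id2label[3])             # Receipt
print(label2id["Non-Object"])  # 4
```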
|
result-kand2-sdxl-wuerst-karlo/4390ae17 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 175
num_examples: 10
download_size: 1353
dataset_size: 175
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "4390ae17"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
btt-mining-coalation/open_web_random_5000 | ---
dataset_info:
features:
- name: text
dtype: string
- name: summary
dtype: string
- name: reward_dpo
dtype: float64
splits:
- name: train
num_bytes: 30649367
num_examples: 5000
download_size: 18002442
dataset_size: 30649367
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "open_web_random_5000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MU-NLPC/Calc-ape210k_selftrain | ---
dataset_info:
config_name: 0-50k
features:
- name: id
dtype: string
- name: question
dtype: string
- name: question_chinese
dtype: string
- name: chain
dtype: string
- name: result
dtype: string
- name: result_float
dtype: float64
- name: equation
dtype: string
- name: template
dtype: string
- name: prediction
sequence: string
- name: model_checkpoint
dtype: string
- name: pred_result
sequence: string
- name: is_correct
sequence: bool
splits:
- name: train
num_bytes: 315968226
num_examples: 50000
download_size: 94681038
dataset_size: 315968226
configs:
- config_name: 0-50k
data_files:
- split: train
path: 0-50k/train-*
---
# Dataset Card for "Calc-ape210k_selftrain"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tr416/small_alpaca_bc_data | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 25568053.244853117
num_examples: 11833
download_size: 13090982
dataset_size: 25568053.244853117
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "small_alpaca_bc_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mdass/236_rand__images | ---
dataset_info:
features:
- name: image
dtype: image
- name: name
dtype: string
splits:
- name: train
num_bytes: 1996549.0
num_examples: 100
download_size: 1991185
dataset_size: 1996549.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "236_rand__images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
heliosprime/twitter_dataset_1713100519 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 10071
num_examples: 27
download_size: 12806
dataset_size: 10071
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1713100519"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gokuls/processed_train_coco | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: pixel_values
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 60520900000
num_examples: 100000
download_size: 18447379186
dataset_size: 60520900000
---
# Dataset Card for "processed_train_coco"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jahb57/gpt2_embeddings_test | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: last_hidden_state
sequence:
sequence: float64
splits:
- name: train
num_bytes: 2644216
num_examples: 10
download_size: 2337581
dataset_size: 2644216
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Helsinki-NLP/opus_infopankki | ---
annotations_creators:
- found
language_creators:
- found
language:
- ar
- en
- es
- et
- fa
- fi
- fr
- ru
- so
- sv
- tr
- zh
license: cc-by-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
pretty_name: OpusInfopankki
config_names:
- ar-en
- ar-es
- ar-et
- ar-fa
- ar-fi
- ar-fr
- ar-ru
- ar-so
- ar-sv
- ar-tr
- ar-zh
- en-es
- en-et
- en-fa
- en-fi
- en-fr
- en-ru
- en-so
- en-sv
- en-tr
- en-zh
- es-et
- es-fa
- es-fi
- es-fr
- es-ru
- es-so
- es-sv
- es-tr
- es-zh
- et-fa
- et-fi
- et-fr
- et-ru
- et-so
- et-sv
- et-tr
- et-zh
- fa-fi
- fa-fr
- fa-ru
- fa-so
- fa-sv
- fa-tr
- fa-zh
- fi-fr
- fi-ru
- fi-so
- fi-sv
- fi-tr
- fi-zh
- fr-ru
- fr-so
- fr-sv
- fr-tr
- fr-zh
- ru-so
- ru-sv
- ru-tr
- ru-zh
- so-sv
- so-tr
- so-zh
- sv-tr
- sv-zh
- tr-zh
dataset_info:
- config_name: ar-en
features:
- name: translation
dtype:
translation:
languages:
- ar
- en
splits:
- name: train
num_bytes: 10133337
num_examples: 50769
download_size: 2775475
dataset_size: 10133337
- config_name: ar-es
features:
- name: translation
dtype:
translation:
languages:
- ar
- es
splits:
- name: train
num_bytes: 8665355
num_examples: 40514
download_size: 2366264
dataset_size: 8665355
- config_name: ar-et
features:
- name: translation
dtype:
translation:
languages:
- ar
- et
splits:
- name: train
num_bytes: 9087555
num_examples: 46573
download_size: 2475165
dataset_size: 9087555
- config_name: ar-fa
features:
- name: translation
dtype:
translation:
languages:
- ar
- fa
splits:
- name: train
num_bytes: 12220196
num_examples: 47007
download_size: 3017006
dataset_size: 12220196
- config_name: ar-fi
features:
- name: translation
dtype:
translation:
languages:
- ar
- fi
splits:
- name: train
num_bytes: 9524265
num_examples: 49608
download_size: 2704144
dataset_size: 9524265
- config_name: ar-fr
features:
- name: translation
dtype:
translation:
languages:
- ar
- fr
splits:
- name: train
num_bytes: 8877629
num_examples: 41061
download_size: 2434048
dataset_size: 8877629
- config_name: ar-ru
features:
- name: translation
dtype:
translation:
languages:
- ar
- ru
splits:
- name: train
num_bytes: 13648194
num_examples: 50286
download_size: 3393441
dataset_size: 13648194
- config_name: ar-so
features:
- name: translation
dtype:
translation:
languages:
- ar
- so
splits:
- name: train
num_bytes: 9555548
num_examples: 44736
download_size: 2614055
dataset_size: 9555548
- config_name: ar-sv
features:
- name: translation
dtype:
translation:
languages:
- ar
- sv
splits:
- name: train
num_bytes: 8585135
num_examples: 43085
download_size: 2312217
dataset_size: 8585135
- config_name: ar-tr
features:
- name: translation
dtype:
translation:
languages:
- ar
- tr
splits:
- name: train
num_bytes: 8691077
num_examples: 41710
download_size: 2417172
dataset_size: 8691077
- config_name: ar-zh
features:
- name: translation
dtype:
translation:
languages:
- ar
- zh
splits:
- name: train
num_bytes: 5973634
num_examples: 29943
download_size: 1523722
dataset_size: 5973634
- config_name: en-es
features:
- name: translation
dtype:
translation:
languages:
- en
- es
splits:
- name: train
num_bytes: 6933983
num_examples: 42657
download_size: 2108422
dataset_size: 6933983
- config_name: en-et
features:
- name: translation
dtype:
translation:
languages:
- en
- et
splits:
- name: train
num_bytes: 8211562
num_examples: 58410
download_size: 2473732
dataset_size: 8211562
- config_name: en-fa
features:
- name: translation
dtype:
translation:
languages:
- en
- fa
splits:
- name: train
num_bytes: 10166305
num_examples: 48277
download_size: 2696051
dataset_size: 10166305
- config_name: en-fi
features:
- name: translation
dtype:
translation:
languages:
- en
- fi
splits:
- name: train
num_bytes: 10913601
num_examples: 84645
download_size: 3183398
dataset_size: 10913601
- config_name: en-fr
features:
- name: translation
dtype:
translation:
languages:
- en
- fr
splits:
- name: train
num_bytes: 8903183
num_examples: 56120
download_size: 2522185
dataset_size: 8903183
- config_name: en-ru
features:
- name: translation
dtype:
translation:
languages:
- en
- ru
splits:
- name: train
num_bytes: 15918195
num_examples: 75305
download_size: 3834067
dataset_size: 15918195
- config_name: en-so
features:
- name: translation
dtype:
translation:
languages:
- en
- so
splits:
- name: train
num_bytes: 7602290
num_examples: 47220
download_size: 2317274
dataset_size: 7602290
- config_name: en-sv
features:
- name: translation
dtype:
translation:
languages:
- en
- sv
splits:
- name: train
num_bytes: 7410975
num_examples: 51749
download_size: 2214196
dataset_size: 7410975
- config_name: en-tr
features:
- name: translation
dtype:
translation:
languages:
- en
- tr
splits:
- name: train
num_bytes: 6929154
num_examples: 44030
download_size: 2158897
dataset_size: 6929154
- config_name: en-zh
features:
- name: translation
dtype:
translation:
languages:
- en
- zh
splits:
- name: train
num_bytes: 4666963
num_examples: 29907
download_size: 1313255
dataset_size: 4666963
- config_name: es-et
features:
- name: translation
dtype:
translation:
languages:
- es
- et
splits:
- name: train
num_bytes: 6611956
num_examples: 42342
download_size: 2109076
dataset_size: 6611956
- config_name: es-fa
features:
- name: translation
dtype:
translation:
languages:
- es
- fa
splits:
- name: train
num_bytes: 9338210
num_examples: 41218
download_size: 2535729
dataset_size: 9338210
- config_name: es-fi
features:
- name: translation
dtype:
translation:
languages:
- es
- fi
splits:
- name: train
num_bytes: 6436298
num_examples: 41479
download_size: 2052254
dataset_size: 6436298
- config_name: es-fr
features:
- name: translation
dtype:
translation:
languages:
- es
- fr
splits:
- name: train
num_bytes: 7368724
num_examples: 41940
download_size: 2234633
dataset_size: 7368724
- config_name: es-ru
features:
- name: translation
dtype:
translation:
languages:
- es
- ru
splits:
- name: train
num_bytes: 9844937
num_examples: 41061
download_size: 2638368
dataset_size: 9844937
- config_name: es-so
features:
- name: translation
dtype:
translation:
languages:
- es
- so
splits:
- name: train
num_bytes: 7257038
num_examples: 41752
download_size: 2261851
dataset_size: 7257038
- config_name: es-sv
features:
- name: translation
dtype:
translation:
languages:
- es
- sv
splits:
- name: train
num_bytes: 6650652
num_examples: 41256
download_size: 2027874
dataset_size: 6650652
- config_name: es-tr
features:
- name: translation
dtype:
translation:
languages:
- es
- tr
splits:
- name: train
num_bytes: 7144065
num_examples: 42191
download_size: 2206245
dataset_size: 7144065
- config_name: es-zh
features:
- name: translation
dtype:
translation:
languages:
- es
- zh
splits:
- name: train
num_bytes: 4358751
num_examples: 26004
download_size: 1176333
dataset_size: 4358751
- config_name: et-fa
features:
- name: translation
dtype:
translation:
languages:
- et
- fa
splits:
- name: train
num_bytes: 9795996
num_examples: 47633
download_size: 2680445
dataset_size: 9795996
- config_name: et-fi
features:
- name: translation
dtype:
translation:
languages:
- et
- fi
splits:
- name: train
num_bytes: 7656989
num_examples: 57353
download_size: 2419554
dataset_size: 7656989
- config_name: et-fr
features:
- name: translation
dtype:
translation:
languages:
- et
- fr
splits:
- name: train
num_bytes: 7012430
num_examples: 44753
download_size: 2193006
dataset_size: 7012430
- config_name: et-ru
features:
- name: translation
dtype:
translation:
languages:
- et
- ru
splits:
- name: train
num_bytes: 12001391
num_examples: 55901
download_size: 3160673
dataset_size: 12001391
- config_name: et-so
features:
- name: translation
dtype:
translation:
languages:
- et
- so
splits:
- name: train
num_bytes: 7260797
num_examples: 46933
download_size: 2319211
dataset_size: 7260797
- config_name: et-sv
features:
- name: translation
dtype:
translation:
languages:
- et
- sv
splits:
- name: train
num_bytes: 6523041
num_examples: 46775
download_size: 2074448
dataset_size: 6523041
- config_name: et-tr
features:
- name: translation
dtype:
translation:
languages:
- et
- tr
splits:
- name: train
num_bytes: 6621665
num_examples: 43729
download_size: 2123880
dataset_size: 6621665
- config_name: et-zh
features:
- name: translation
dtype:
translation:
languages:
- et
- zh
splits:
- name: train
num_bytes: 4305273
num_examples: 27826
download_size: 1201275
dataset_size: 4305273
- config_name: fa-fi
features:
- name: translation
dtype:
translation:
languages:
- fa
- fi
splits:
- name: train
num_bytes: 9579257
num_examples: 46924
download_size: 2618699
dataset_size: 9579257
- config_name: fa-fr
features:
- name: translation
dtype:
translation:
languages:
- fa
- fr
splits:
- name: train
num_bytes: 9574254
num_examples: 41975
download_size: 2588917
dataset_size: 9574254
- config_name: fa-ru
features:
- name: translation
dtype:
translation:
languages:
- fa
- ru
splits:
- name: train
num_bytes: 13544451
num_examples: 47814
download_size: 3351553
dataset_size: 13544451
- config_name: fa-so
features:
- name: translation
dtype:
translation:
languages:
- fa
- so
splits:
- name: train
num_bytes: 10254723
num_examples: 45571
download_size: 2813443
dataset_size: 10254723
- config_name: fa-sv
features:
- name: translation
dtype:
translation:
languages:
- fa
- sv
splits:
- name: train
num_bytes: 9153752
num_examples: 43510
download_size: 2512908
dataset_size: 9153752
- config_name: fa-tr
features:
- name: translation
dtype:
translation:
languages:
- fa
- tr
splits:
- name: train
num_bytes: 9393209
num_examples: 42708
download_size: 2599794
dataset_size: 9393209
- config_name: fa-zh
features:
- name: translation
dtype:
translation:
languages:
- fa
- zh
splits:
- name: train
num_bytes: 5792439
num_examples: 27748
download_size: 1413779
dataset_size: 5792439
- config_name: fi-fr
features:
- name: translation
dtype:
translation:
languages:
- fi
- fr
splits:
- name: train
num_bytes: 8310851
num_examples: 55087
download_size: 2455971
dataset_size: 8310851
- config_name: fi-ru
features:
- name: translation
dtype:
translation:
languages:
- fi
- ru
splits:
- name: train
num_bytes: 15188168
num_examples: 74699
download_size: 3842831
dataset_size: 15188168
- config_name: fi-so
features:
- name: translation
dtype:
translation:
languages:
- fi
- so
splits:
- name: train
num_bytes: 7076221
num_examples: 46032
download_size: 2219872
dataset_size: 7076221
- config_name: fi-sv
features:
- name: translation
dtype:
translation:
languages:
- fi
- sv
splits:
- name: train
num_bytes: 6947224
num_examples: 51506
download_size: 2137629
dataset_size: 6947224
- config_name: fi-tr
features:
- name: translation
dtype:
translation:
languages:
- fi
- tr
splits:
- name: train
num_bytes: 6438716
num_examples: 42781
download_size: 2081615
dataset_size: 6438716
- config_name: fi-zh
features:
- name: translation
dtype:
translation:
languages:
- fi
- zh
splits:
- name: train
num_bytes: 4434168
num_examples: 29503
download_size: 1312557
dataset_size: 4434168
- config_name: fr-ru
features:
- name: translation
dtype:
translation:
languages:
- fr
- ru
splits:
- name: train
num_bytes: 12564196
num_examples: 54213
download_size: 3159587
dataset_size: 12564196
- config_name: fr-so
features:
- name: translation
dtype:
translation:
languages:
- fr
- so
splits:
- name: train
num_bytes: 7473559
num_examples: 42652
download_size: 2344399
dataset_size: 7473559
- config_name: fr-sv
features:
- name: translation
dtype:
translation:
languages:
- fr
- sv
splits:
- name: train
num_bytes: 7027563
num_examples: 43524
download_size: 2107653
dataset_size: 7027563
- config_name: fr-tr
features:
- name: translation
dtype:
translation:
languages:
- fr
- tr
splits:
- name: train
num_bytes: 7341078
num_examples: 43036
download_size: 2279611
dataset_size: 7341078
- config_name: fr-zh
features:
- name: translation
dtype:
translation:
languages:
- fr
- zh
splits:
- name: train
num_bytes: 4525109
num_examples: 26654
download_size: 1211652
dataset_size: 4525109
- config_name: ru-so
features:
- name: translation
dtype:
translation:
languages:
- ru
- so
splits:
- name: train
num_bytes: 10809193
num_examples: 45430
download_size: 2932790
dataset_size: 10809193
- config_name: ru-sv
features:
- name: translation
dtype:
translation:
languages:
- ru
- sv
splits:
- name: train
num_bytes: 10517433
num_examples: 47672
download_size: 2724280
dataset_size: 10517433
- config_name: ru-tr
features:
- name: translation
dtype:
translation:
languages:
- ru
- tr
splits:
- name: train
num_bytes: 9930592
num_examples: 42587
download_size: 2727600
dataset_size: 9930592
- config_name: ru-zh
features:
- name: translation
dtype:
translation:
languages:
- ru
- zh
splits:
- name: train
num_bytes: 6417808
num_examples: 29523
download_size: 1582749
dataset_size: 6417808
- config_name: so-sv
features:
- name: translation
dtype:
translation:
languages:
- so
- sv
splits:
- name: train
num_bytes: 6763754
num_examples: 42384
download_size: 2098877
dataset_size: 6763754
- config_name: so-tr
features:
- name: translation
dtype:
translation:
languages:
- so
- tr
splits:
- name: train
num_bytes: 7272349
num_examples: 43242
download_size: 2279999
dataset_size: 7272349
- config_name: so-zh
features:
- name: translation
dtype:
translation:
languages:
- so
- zh
splits:
- name: train
num_bytes: 4535955
num_examples: 27090
download_size: 1267321
dataset_size: 4535955
- config_name: sv-tr
features:
- name: translation
dtype:
translation:
languages:
- sv
- tr
splits:
- name: train
num_bytes: 6637744
num_examples: 42555
download_size: 2045078
dataset_size: 6637744
- config_name: sv-zh
features:
- name: translation
dtype:
translation:
languages:
- sv
- zh
splits:
- name: train
num_bytes: 4216405
num_examples: 26898
download_size: 1149609
dataset_size: 4216405
- config_name: tr-zh
features:
- name: translation
dtype:
translation:
languages:
- tr
- zh
splits:
- name: train
num_bytes: 4494071
num_examples: 27323
download_size: 1221951
dataset_size: 4494071
configs:
- config_name: ar-en
data_files:
- split: train
path: ar-en/train-*
- config_name: ar-es
data_files:
- split: train
path: ar-es/train-*
- config_name: ar-et
data_files:
- split: train
path: ar-et/train-*
- config_name: ar-fa
data_files:
- split: train
path: ar-fa/train-*
- config_name: ar-fi
data_files:
- split: train
path: ar-fi/train-*
- config_name: ar-fr
data_files:
- split: train
path: ar-fr/train-*
- config_name: ar-ru
data_files:
- split: train
path: ar-ru/train-*
- config_name: ar-so
data_files:
- split: train
path: ar-so/train-*
- config_name: ar-sv
data_files:
- split: train
path: ar-sv/train-*
- config_name: ar-tr
data_files:
- split: train
path: ar-tr/train-*
- config_name: ar-zh
data_files:
- split: train
path: ar-zh/train-*
- config_name: en-es
data_files:
- split: train
path: en-es/train-*
- config_name: en-et
data_files:
- split: train
path: en-et/train-*
- config_name: en-fa
data_files:
- split: train
path: en-fa/train-*
- config_name: en-fi
data_files:
- split: train
path: en-fi/train-*
- config_name: en-fr
data_files:
- split: train
path: en-fr/train-*
- config_name: en-ru
data_files:
- split: train
path: en-ru/train-*
- config_name: en-so
data_files:
- split: train
path: en-so/train-*
- config_name: en-sv
data_files:
- split: train
path: en-sv/train-*
- config_name: en-tr
data_files:
- split: train
path: en-tr/train-*
- config_name: en-zh
data_files:
- split: train
path: en-zh/train-*
- config_name: es-et
data_files:
- split: train
path: es-et/train-*
- config_name: es-fa
data_files:
- split: train
path: es-fa/train-*
- config_name: es-fi
data_files:
- split: train
path: es-fi/train-*
- config_name: es-fr
data_files:
- split: train
path: es-fr/train-*
- config_name: es-ru
data_files:
- split: train
path: es-ru/train-*
- config_name: es-so
data_files:
- split: train
path: es-so/train-*
- config_name: es-sv
data_files:
- split: train
path: es-sv/train-*
- config_name: es-tr
data_files:
- split: train
path: es-tr/train-*
- config_name: es-zh
data_files:
- split: train
path: es-zh/train-*
- config_name: et-fa
data_files:
- split: train
path: et-fa/train-*
- config_name: et-fi
data_files:
- split: train
path: et-fi/train-*
- config_name: et-fr
data_files:
- split: train
path: et-fr/train-*
- config_name: et-ru
data_files:
- split: train
path: et-ru/train-*
- config_name: et-so
data_files:
- split: train
path: et-so/train-*
- config_name: et-sv
data_files:
- split: train
path: et-sv/train-*
- config_name: et-tr
data_files:
- split: train
path: et-tr/train-*
- config_name: et-zh
data_files:
- split: train
path: et-zh/train-*
- config_name: fa-fi
data_files:
- split: train
path: fa-fi/train-*
- config_name: fa-fr
data_files:
- split: train
path: fa-fr/train-*
- config_name: fa-ru
data_files:
- split: train
path: fa-ru/train-*
- config_name: fa-so
data_files:
- split: train
path: fa-so/train-*
- config_name: fa-sv
data_files:
- split: train
path: fa-sv/train-*
- config_name: fa-tr
data_files:
- split: train
path: fa-tr/train-*
- config_name: fa-zh
data_files:
- split: train
path: fa-zh/train-*
- config_name: fi-fr
data_files:
- split: train
path: fi-fr/train-*
- config_name: fi-ru
data_files:
- split: train
path: fi-ru/train-*
- config_name: fi-so
data_files:
- split: train
path: fi-so/train-*
- config_name: fi-sv
data_files:
- split: train
path: fi-sv/train-*
- config_name: fi-tr
data_files:
- split: train
path: fi-tr/train-*
- config_name: fi-zh
data_files:
- split: train
path: fi-zh/train-*
- config_name: fr-ru
data_files:
- split: train
path: fr-ru/train-*
- config_name: fr-so
data_files:
- split: train
path: fr-so/train-*
- config_name: fr-sv
data_files:
- split: train
path: fr-sv/train-*
- config_name: fr-tr
data_files:
- split: train
path: fr-tr/train-*
- config_name: fr-zh
data_files:
- split: train
path: fr-zh/train-*
- config_name: ru-so
data_files:
- split: train
path: ru-so/train-*
- config_name: ru-sv
data_files:
- split: train
path: ru-sv/train-*
- config_name: ru-tr
data_files:
- split: train
path: ru-tr/train-*
- config_name: ru-zh
data_files:
- split: train
path: ru-zh/train-*
- config_name: so-sv
data_files:
- split: train
path: so-sv/train-*
- config_name: so-tr
data_files:
- split: train
path: so-tr/train-*
- config_name: so-zh
data_files:
- split: train
path: so-zh/train-*
- config_name: sv-tr
data_files:
- split: train
path: sv-tr/train-*
- config_name: sv-zh
data_files:
- split: train
path: sv-zh/train-*
- config_name: tr-zh
data_files:
- split: train
path: tr-zh/train-*
---
# Dataset Card for infopankki
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://opus.nlpl.eu/infopankki/corpus/version/infopankki
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Leaderboard:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
A parallel corpus of 12 languages, 66 bitexts.
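The 66 bitexts cover every unordered pair of the 12 languages. A small sketch that reconstructs the config names, assuming they follow the alphabetical `xx-yy` pattern used in the metadata above:

```python
from itertools import combinations

# The 12 language codes listed in this card's metadata.
langs = ["ar", "en", "es", "et", "fa", "fi", "fr", "ru", "so", "sv", "tr", "zh"]

# One config per unordered, alphabetically ordered pair, e.g. "en-fi".
configs = [f"{a}-{b}" for a, b in combinations(sorted(langs), 2)]

print(len(configs))  # 66 configs — one bitext per language pair
```

A single pair can then be loaded with `load_dataset("Helsinki-NLP/opus_infopankki", "en-fi")`.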
### Supported Tasks and Leaderboards
The underlying task is machine translation.
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Source: http://www.infopankki.fi via the Open Data API
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Licensed under [CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
If you use any part of the corpus in your own work, please cite the following article:
```
@inproceedings{tiedemann-2012-parallel,
title = "Parallel Data, Tools and Interfaces in {OPUS}",
author = {Tiedemann, J{\"o}rg},
editor = "Calzolari, Nicoletta and
Choukri, Khalid and
Declerck, Thierry and
Do{\u{g}}an, Mehmet U{\u{g}}ur and
Maegaard, Bente and
Mariani, Joseph and
Moreno, Asuncion and
Odijk, Jan and
Piperidis, Stelios",
booktitle = "Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}'12)",
month = may,
year = "2012",
address = "Istanbul, Turkey",
publisher = "European Language Resources Association (ELRA)",
url = "http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf",
pages = "2214--2218",
abstract = "This paper presents the current status of OPUS, a growing language resource of parallel corpora and related tools. The focus in OPUS is to provide freely available data sets in various formats together with basic annotation to be useful for applications in computational linguistics, translation studies and cross-linguistic corpus studies. In this paper, we report about new data sets and their features, additional annotation tools and models provided from the website and essential interfaces and on-line services included in the project.",
}
```
### Contributions
Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset. |
liuyanchen1015/MULTI_VALUE_rte_possessives_for_pre | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: test
num_bytes: 606248
num_examples: 1492
- name: train
num_bytes: 544634
num_examples: 1311
download_size: 749037
dataset_size: 1150882
---
# Dataset Card for "MULTI_VALUE_rte_possessives_for_pre"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
harpreetsahota/gemma_vibe_check_cot | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: output
dtype: string
- name: DeciLM-7B-Instruct
dtype: string
- name: Gemma-7B-it
dtype: string
- name: cot_qa_DeciLM-7B-Instruct
struct:
- name: reasoning
dtype: string
- name: score
dtype: int64
- name: value
dtype: string
- name: cot_qa_Gemma-7B-it
struct:
- name: reasoning
dtype: string
- name: score
dtype: int64
- name: value
dtype: string
splits:
- name: train
num_bytes: 441429
num_examples: 100
download_size: 218439
dataset_size: 441429
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CyberHarem/tang_keke_lovelivesuperstar | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of tang_keke/唐可可/탕쿠쿠 (Love Live! Superstar!!)
This is the dataset of tang_keke/唐可可/탕쿠쿠 (Love Live! Superstar!!), containing 500 images and their tags.
The core tags of this character are `short_hair, bangs, blue_eyes, grey_hair, ribbon, neck_ribbon, red_ribbon`, which are pruned in this dataset.
Images were crawled from many sites (e.g. Danbooru, Pixiv, Zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 736.87 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tang_keke_lovelivesuperstar/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 354.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tang_keke_lovelivesuperstar/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1273 | 821.73 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tang_keke_lovelivesuperstar/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 621.72 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tang_keke_lovelivesuperstar/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1273 | 1.28 GiB | [Download](https://huggingface.co/datasets/CyberHarem/tang_keke_lovelivesuperstar/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/tang_keke_lovelivesuperstar',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 21 |  |  |  |  |  | 1girl, blue_jacket, grey_dress, long_sleeves, smile, solo, white_shirt, yuigaoka_school_uniform, collared_shirt, looking_at_viewer, open_jacket, pinafore_dress, white_background, simple_background, blush, open_mouth, breasts, multicolored_hair |
| 1 | 5 |  |  |  |  |  | 1girl, black_socks, blue_jacket, brown_footwear, grey_dress, light_brown_hair, loafers, long_sleeves, looking_at_viewer, open_jacket, pinafore_dress, shiny_hair, solo, white_background, yuigaoka_school_uniform, collared_shirt, full_body, kneehighs, smile, white_shirt, simple_background, blush, medium_breasts, multicolored_hair, open_mouth, sitting |
| 2 | 26 |  |  |  |  |  | 1girl, smile, solo, white_gloves, looking_at_viewer, elbow_gloves, hair_bow, open_mouth, blush, hairband, white_dress, brown_hair, pink_dress, pink_bow, puffy_short_sleeves |
| 3 | 24 |  |  |  |  |  | 1girl, solo, collarbone, looking_at_viewer, outdoors, smile, navel, blush, day, bracelet, cloud, blue_sky, ocean, sun_hat, hair_ornament, bikini_skirt, flower, blue_bikini, bow, choker, frilled_bikini, medium_breasts, open_mouth |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blue_jacket | grey_dress | long_sleeves | smile | solo | white_shirt | yuigaoka_school_uniform | collared_shirt | looking_at_viewer | open_jacket | pinafore_dress | white_background | simple_background | blush | open_mouth | breasts | multicolored_hair | black_socks | brown_footwear | light_brown_hair | loafers | shiny_hair | full_body | kneehighs | medium_breasts | sitting | white_gloves | elbow_gloves | hair_bow | hairband | white_dress | brown_hair | pink_dress | pink_bow | puffy_short_sleeves | collarbone | outdoors | navel | day | bracelet | cloud | blue_sky | ocean | sun_hat | hair_ornament | bikini_skirt | flower | blue_bikini | bow | choker | frilled_bikini |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------|:-------------|:---------------|:--------|:-------|:--------------|:--------------------------|:-----------------|:--------------------|:--------------|:-----------------|:-------------------|:--------------------|:--------|:-------------|:----------|:--------------------|:--------------|:-----------------|:-------------------|:----------|:-------------|:------------|:------------|:-----------------|:----------|:---------------|:---------------|:-----------|:-----------|:--------------|:-------------|:-------------|:-----------|:----------------------|:-------------|:-----------|:--------|:------|:-----------|:--------|:-----------|:--------|:----------|:----------------|:---------------|:---------|:--------------|:------|:---------|:-----------------|
| 0 | 21 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 26 |  |  |  |  |  | X | | | | X | X | | | | X | | | | | X | X | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | |
| 3 | 24 |  |  |  |  |  | X | | | | X | X | | | | X | | | | | X | X | | | | | | | | | | X | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
habanoz/airoboros-3.1-no-mathjson-max-1k-chat-format | ---
dataset_info:
features:
- name: category
dtype: string
- name: conversation
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 11937413
num_examples: 20180
download_size: 5699534
dataset_size: 11937413
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
Copy of [habanoz/airoboros-3.1-no-mathjson-max-1k](https://huggingface.co/datasets/habanoz/airoboros-3.1-no-mathjson-max-1k) transformed to work with Hugging Face chat templates, i.e. each conversation turn carries a `role` (`user` or `assistant`) and a `content` field.
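The role/content layout above is what `tokenizer.apply_chat_template` in `transformers` expects. As a rough, library-free sketch of how such a conversation renders into a single training string (the `<|role|>` delimiter style below is illustrative, not any specific model's template):

```python
def render_conversation(conversation):
    """Render a list of {"role", "content"} turns into one training string.

    The <|role|> delimiter style is illustrative; in practice you would call
    tokenizer.apply_chat_template(conversation, tokenize=False) instead.
    """
    parts = []
    for turn in conversation:
        parts.append(f"<|{turn['role']}|>\n{turn['content']}")
    return "\n".join(parts)
```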
Note that samples are limited to 1K length. |
cryptom/ceval-exam | ---
license: cc-by-nc-sa-4.0
task_categories:
- text-classification
- multiple-choice
- question-answering
language:
- zh
pretty_name: C-Eval
size_categories:
- 10K<n<100K
---
C-Eval is a comprehensive Chinese evaluation suite for foundation models. It consists of 13,948 multiple-choice questions spanning 52 diverse disciplines and four difficulty levels. Please visit our [website](https://cevalbenchmark.com/) and [GitHub](https://github.com/SJTU-LIT/ceval/tree/main), or check our [paper](https://arxiv.org/abs/2305.08322), for more details.
Each subject has three splits: dev, val, and test. The dev set for each subject consists of five exemplars with explanations for few-shot evaluation. The val set is intended for hyperparameter tuning, and the test set is for model evaluation. Labels on the test split are not released; users are required to submit their results to obtain test accuracy automatically. [How to submit?](https://github.com/SJTU-LIT/ceval/tree/main#how-to-submit)
### Load the data
```python
from datasets import load_dataset
dataset = load_dataset("ceval/ceval-exam", name="computer_network")
print(dataset['val'][0])
# {'id': 0, 'question': '使用位填充方法,以01111110为位首flag,数据为011011111111111111110010,求问传送时要添加几个0____', 'A': '1', 'B': '2', 'C': '3', 'D': '4', 'answer': 'C', 'explanation': ''}
```
More details on loading and using the data are at our [github page](https://github.com/SJTU-LIT/ceval#data).
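Since the dev split carries the five few-shot exemplars, they can be assembled into a prompt ahead of each val/test question. The helper below is a minimal sketch using the record schema shown above (`question`, options `A`–`D`, `answer`); it is illustrative, not the official evaluation harness.

```python
def format_example(row, include_answer=True):
    """Render one C-Eval record (question plus options A-D) as prompt text."""
    text = row["question"]
    for option in ("A", "B", "C", "D"):
        text += f"\n{option}. {row[option]}"
    text += "\n答案:"
    if include_answer:
        text += row["answer"]
    return text


def build_few_shot_prompt(dev_rows, test_row):
    """Prepend the dev exemplars (with answers) to the unanswered test question."""
    shots = [format_example(r) for r in dev_rows]
    shots.append(format_example(test_row, include_answer=False))
    return "\n\n".join(shots)
```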
Please cite our paper if you use our dataset.
```
@article{huang2023ceval,
title={C-Eval: A Multi-Level Multi-Discipline Chinese Evaluation Suite for Foundation Models},
author={Huang, Yuzhen and Bai, Yuzhuo and Zhu, Zhihao and Zhang, Junlei and Zhang, Jinghan and Su, Tangjun and Liu, Junteng and Lv, Chuancheng and Zhang, Yikai and Lei, Jiayi and Fu, Yao and Sun, Maosong and He, Junxian},
journal={arXiv preprint arXiv:2305.08322},
year={2023}
}
```
|
gsh3729/sw_t1 | ---
dataset_info:
features:
- name: filename
dtype: string
- name: tif
dtype: binary
- name: tfw
dtype: binary
splits:
- name: train
num_bytes: 396703104
num_examples: 30000
download_size: 393236076
dataset_size: 396703104
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
gart-labor/eclassTrainST | ---
dataset_info:
features:
- name: text
dtype: string
- name: entailment
dtype: string
- name: contradiction
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 327174992
num_examples: 698880
- name: eval
num_bytes: 219201779
num_examples: 450912
download_size: 46751846
dataset_size: 546376771
task_categories:
- sentence-similarity
language:
- en
size_categories:
- 100K<n<1M
---
# Dataset Card for "eclassTrainST"
This NLI dataset can be used to fine-tune models for the task of sentence similarity. It consists of names and descriptions of pump properties from the ECLASS standard. |
yzhuang/metatree_abalone | ---
dataset_info:
features:
- name: id
dtype: int64
- name: X
sequence: float64
- name: y
dtype: int64
splits:
- name: train
num_bytes: 223516
num_examples: 2941
- name: validation
num_bytes: 93936
num_examples: 1236
download_size: 101819
dataset_size: 317452
---
# Dataset Card for "metatree_abalone"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
qazisaad/llama_2_product_titles-esci_test-sft | ---
dataset_info:
features:
- name: index
dtype: int64
- name: query
dtype: string
- name: average_score
dtype: float64
- name: total_score
dtype: float64
- name: text
dtype: string
- name: label
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 4761528
num_examples: 13996
download_size: 1243412
dataset_size: 4761528
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "llama_2_product_titles-esci_test-sft"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
JLB-JLB/seizure_eeg_iirFilter_greyscale_224x224_6secWindow | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev
path: data/dev-*
- split: eval
path: data/eval-*
dataset_info:
features:
- name: image
dtype: image
- name: epoch
dtype: int64
- name: label
dtype:
class_label:
names:
'0': bckg
'1': seiz
splits:
- name: train
num_bytes: 24002591090.568
num_examples: 814568
- name: dev
num_bytes: 12108190175.63
num_examples: 390190
- name: eval
num_bytes: 3341391277.28
num_examples: 114035
download_size: 13206623813
dataset_size: 39452172543.478
---
# Dataset Card for "seizure_eeg_iirFilter_greyscale_224x224_6secWindow"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MattiaL/tapir-cleaned-116k | ---
license: cc-by-nc-4.0
language:
- en
tags:
- instruction-finetuning
pretty_name: Tapir-Cleaned
task_categories:
- text-generation
size_categories:
- 100K<n<1M
---
# Dataset Card for Tapir-Cleaned
This is a revised version of the DAISLab dataset of IFTTT rules, which has been thoroughly cleaned, scored, and adjusted for the purpose of instruction-tuning.
## Tapir Dataset Summary
Tapir is a subset of the larger DAISLab dataset, which comprises 242,480 recipes extracted from the IFTTT platform.
After a thorough cleaning process that involved the removal of redundant and inconsistent recipes, the refined dataset was condensed to include 116,862 high-quality recipes.
This curated set of instruction data is particularly useful for instruction-tuning language models,
allowing them to follow instructions more accurately and achieve better performance.
The latest version of Tapir includes a correlation score that helps identify the most appropriate description-rule pairs for instruction tuning.
Description-rule pairs with a score greater than 0.75 are deemed good enough and are prioritized for further analysis and tuning.
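The 0.75 cutoff can be applied with a simple filter. The sketch below works on plain dict rows; with 🤗 `datasets`, `ds.filter(lambda ex: float(ex["score"]) > 0.75)` would do the same. Note that `score` appears as a string in the example record further down, hence the `float()` cast.

```python
def keep_high_quality(rows, threshold=0.75):
    """Keep description-rule pairs whose correlation score exceeds the threshold."""
    return [row for row in rows if float(row["score"]) > threshold]
```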
### Supported Tasks and Leaderboards
The Tapir dataset is designed for instruction-tuning pretrained language models.
### Languages
The data in Tapir are mainly in English (BCP-47 en).
# Dataset Structure
### Data Instances
```json
{
"instruction":"From the description of a rule: identify the 'trigger', identify the 'action', write a IF 'trigger' THEN 'action' rule.",
"input":"If lostphone is texted to my phone the volume will turn up to 100 so I can find it.",
"output":"IF Android SMS New SMS received matches search THEN Android Device Set ringtone volume",
"score":"0.804322",
"text": "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nFrom the description of a rule: identify the 'trigger', identify the 'action', write a IF 'trigger' THEN 'action' rule.\n\n### Input:\nIf lostphone is texted to my phone the volume will turn up to 100 so I can find it.\n\n### Response:\nIF Android SMS New SMS received matches search THEN Android Device Set ringtone volume",
}
```
### Data Fields
The data fields are as follows:
* `instruction`: describes the task the model should perform.
* `input`: context or input for the task. Each of the 116K inputs is unique.
* `output`: the answer taken from the original Tapir Dataset formatted as an IFTTT recipe.
* `score`: the correlation score obtained via BertForNextSentencePrediction
* `text`: the `instruction`, `input` and `output` formatted with the [prompt template](https://github.com/tatsu-lab/stanford_alpaca#data-release) used by the authors of Alpaca for fine-tuning their models.
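As a sketch, the `text` field can be reconstructed from `instruction`, `input`, and `output` following the template visible in the example record above (the helper name is illustrative):

```python
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{output}"
)


def build_text(example):
    """Assemble the `text` field from instruction/input/output, as in the card's example."""
    return ALPACA_TEMPLATE.format(
        instruction=example["instruction"],
        input=example["input"],
        output=example["output"],
    )
```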
### Data Splits
| | train |
|---------------|------:|
| tapir | 116862 |
### Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode).
### Citation Information
```
@misc{tapir,
  author = {Mattia Limone and Gaetano Cimino and Annunziata Elefante},
title = {TAPIR: Trigger Action Platform for Information Retrieval},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/MattiaLimone/ifttt_recommendation_system}},
}
``` |
rubrix/wildfire_tweets | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- other
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: Tweets about Wildfire and climate change
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- rubrix
- climate change
task_categories:
- text-classification
task_ids: []
---
|
kpriyanshu256/MultiTabQA-multitable_pretraining-Salesforce-codet5-base_train-latex-105000 | ---
dataset_info:
features:
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: labels
sequence:
sequence: int64
splits:
- name: train
num_bytes: 13336000
num_examples: 1000
download_size: 1031501
dataset_size: 13336000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
KrayIzuna/henrys | ---
license: openrail
---
|
dvilasuero/backup_filipino_dibt | ---
dataset_info:
features:
- name: source
dtype: string
id: field
- name: target
list:
- name: user_id
dtype: string
id: question
- name: value
dtype: string
id: suggestion
- name: status
dtype: string
id: question
- name: target-suggestion
dtype: string
id: suggestion
- name: target-suggestion-metadata
struct:
- name: type
dtype: string
id: suggestion-metadata
- name: score
dtype: float32
id: suggestion-metadata
- name: agent
dtype: string
id: suggestion-metadata
- name: external_id
dtype: string
id: external_id
- name: metadata
dtype: string
id: metadata
splits:
- name: train
num_bytes: 721139
num_examples: 501
download_size: 401053
dataset_size: 721139
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
orpo-explorers/OpenHermesPreferences-250k | ---
dataset_info:
features:
- name: source
dtype: string
- name: category
dtype: string
- name: prompt
dtype: string
- name: candidates_completions
sequence: string
- name: candidate_policies
sequence: string
- name: ranks
sequence: int64
- name: rank_str
dtype: string
- name: chosen_policy
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected_policy
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 1835058809.0834672
num_examples: 250000
download_size: 913952324
dataset_size: 1835058809.0834672
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
liuyanchen1015/MULTI_VALUE_wnli_our_us | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: train
num_bytes: 796
num_examples: 4
download_size: 3180
dataset_size: 796
---
# Dataset Card for "MULTI_VALUE_wnli_our_us"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
g30rv17ys/octnormal200 | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 13541578.0
num_examples: 200
download_size: 13542226
dataset_size: 13541578.0
---
# Dataset Card for "octnormal200"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SniiKz/TrainingSetAlpha | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 6064437
num_examples: 13056
download_size: 1214156
dataset_size: 6064437
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mayflowergmbh/intel_orca_dpo_toybox | ---
license: apache-2.0
---
|
liuyanchen1015/MULTI_VALUE_stsb_indefinite_for_zero | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: score
dtype: float64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 239060
num_examples: 1423
- name: test
num_bytes: 186294
num_examples: 1257
- name: train
num_bytes: 829492
num_examples: 5305
download_size: 739468
dataset_size: 1254846
---
# Dataset Card for "MULTI_VALUE_stsb_indefinite_for_zero"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
assafm/counter-strike-001 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 279997
num_examples: 1373
download_size: 107410
dataset_size: 279997
---
# Dataset Card for "counter-strike-001"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_declare-lab__starling-7B | ---
pretty_name: Evaluation run of declare-lab/starling-7B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [declare-lab/starling-7B](https://huggingface.co/declare-lab/starling-7B) on the\
\ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_declare-lab__starling-7B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-09T19:58:56.929438](https://huggingface.co/datasets/open-llm-leaderboard/details_declare-lab__starling-7B/blob/main/results_2024-02-09T19-58-56.929438.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.47683952622046394,\n\
\ \"acc_stderr\": 0.0344002540826661,\n \"acc_norm\": 0.4830040583763742,\n\
\ \"acc_norm_stderr\": 0.03519671795676814,\n \"mc1\": 0.3268053855569155,\n\
\ \"mc1_stderr\": 0.01641987473113503,\n \"mc2\": 0.4817697697777851,\n\
\ \"mc2_stderr\": 0.015595723237294131\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.48208191126279865,\n \"acc_stderr\": 0.014602005585490978,\n\
\ \"acc_norm\": 0.5102389078498294,\n \"acc_norm_stderr\": 0.014608326906285012\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5793666600278828,\n\
\ \"acc_stderr\": 0.0049265184393722595,\n \"acc_norm\": 0.7676757618004382,\n\
\ \"acc_norm_stderr\": 0.004214515851745317\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.27,\n \"acc_stderr\": 0.0446196043338474,\n \
\ \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.0446196043338474\n },\n\
\ \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.45185185185185184,\n\
\ \"acc_stderr\": 0.04299268905480864,\n \"acc_norm\": 0.45185185185185184,\n\
\ \"acc_norm_stderr\": 0.04299268905480864\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.46710526315789475,\n \"acc_stderr\": 0.040601270352363966,\n\
\ \"acc_norm\": 0.46710526315789475,\n \"acc_norm_stderr\": 0.040601270352363966\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.45,\n\
\ \"acc_stderr\": 0.05,\n \"acc_norm\": 0.45,\n \"acc_norm_stderr\"\
: 0.05\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"\
acc\": 0.5433962264150943,\n \"acc_stderr\": 0.03065674869673943,\n \
\ \"acc_norm\": 0.5433962264150943,\n \"acc_norm_stderr\": 0.03065674869673943\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.4930555555555556,\n\
\ \"acc_stderr\": 0.04180806750294938,\n \"acc_norm\": 0.4930555555555556,\n\
\ \"acc_norm_stderr\": 0.04180806750294938\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.04793724854411018,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.04793724854411018\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.42,\n \"acc_stderr\": 0.04960449637488584,\n \"acc_norm\": 0.42,\n\
\ \"acc_norm_stderr\": 0.04960449637488584\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.36,\n \"acc_stderr\": 0.048241815132442176,\n \
\ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.048241815132442176\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.45664739884393063,\n\
\ \"acc_stderr\": 0.03798106566014499,\n \"acc_norm\": 0.45664739884393063,\n\
\ \"acc_norm_stderr\": 0.03798106566014499\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.2647058823529412,\n \"acc_stderr\": 0.04389869956808778,\n\
\ \"acc_norm\": 0.2647058823529412,\n \"acc_norm_stderr\": 0.04389869956808778\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.57,\n \"acc_stderr\": 0.049756985195624284,\n \"acc_norm\": 0.57,\n\
\ \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.3829787234042553,\n \"acc_stderr\": 0.03177821250236922,\n\
\ \"acc_norm\": 0.3829787234042553,\n \"acc_norm_stderr\": 0.03177821250236922\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2631578947368421,\n\
\ \"acc_stderr\": 0.04142439719489363,\n \"acc_norm\": 0.2631578947368421,\n\
\ \"acc_norm_stderr\": 0.04142439719489363\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.46206896551724136,\n \"acc_stderr\": 0.041546596717075474,\n\
\ \"acc_norm\": 0.46206896551724136,\n \"acc_norm_stderr\": 0.041546596717075474\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.3148148148148148,\n \"acc_stderr\": 0.023919984164047732,\n \"\
acc_norm\": 0.3148148148148148,\n \"acc_norm_stderr\": 0.023919984164047732\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.30952380952380953,\n\
\ \"acc_stderr\": 0.04134913018303316,\n \"acc_norm\": 0.30952380952380953,\n\
\ \"acc_norm_stderr\": 0.04134913018303316\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.5290322580645161,\n \"acc_stderr\": 0.028396016402761,\n \"acc_norm\"\
: 0.5290322580645161,\n \"acc_norm_stderr\": 0.028396016402761\n },\n\
\ \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.3399014778325123,\n\
\ \"acc_stderr\": 0.033327690684107895,\n \"acc_norm\": 0.3399014778325123,\n\
\ \"acc_norm_stderr\": 0.033327690684107895\n },\n \"harness|hendrycksTest-high_school_computer_science|5\"\
: {\n \"acc\": 0.41,\n \"acc_stderr\": 0.04943110704237102,\n \
\ \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.04943110704237102\n \
\ },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"\
acc\": 0.5757575757575758,\n \"acc_stderr\": 0.03859268142070264,\n \
\ \"acc_norm\": 0.5757575757575758,\n \"acc_norm_stderr\": 0.03859268142070264\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.6161616161616161,\n \"acc_stderr\": 0.03464881675016338,\n \"\
acc_norm\": 0.6161616161616161,\n \"acc_norm_stderr\": 0.03464881675016338\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.6994818652849741,\n \"acc_stderr\": 0.033088185944157494,\n\
\ \"acc_norm\": 0.6994818652849741,\n \"acc_norm_stderr\": 0.033088185944157494\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.44358974358974357,\n \"acc_stderr\": 0.025189149894764198,\n\
\ \"acc_norm\": 0.44358974358974357,\n \"acc_norm_stderr\": 0.025189149894764198\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.2777777777777778,\n \"acc_stderr\": 0.027309140588230182,\n \
\ \"acc_norm\": 0.2777777777777778,\n \"acc_norm_stderr\": 0.027309140588230182\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.40756302521008403,\n \"acc_stderr\": 0.03191863374478466,\n\
\ \"acc_norm\": 0.40756302521008403,\n \"acc_norm_stderr\": 0.03191863374478466\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.32450331125827814,\n \"acc_stderr\": 0.038227469376587525,\n \"\
acc_norm\": 0.32450331125827814,\n \"acc_norm_stderr\": 0.038227469376587525\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.6238532110091743,\n \"acc_stderr\": 0.02076923196820508,\n \"\
acc_norm\": 0.6238532110091743,\n \"acc_norm_stderr\": 0.02076923196820508\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.4166666666666667,\n \"acc_stderr\": 0.03362277436608043,\n \"\
acc_norm\": 0.4166666666666667,\n \"acc_norm_stderr\": 0.03362277436608043\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.6274509803921569,\n \"acc_stderr\": 0.033933885849584046,\n \"\
acc_norm\": 0.6274509803921569,\n \"acc_norm_stderr\": 0.033933885849584046\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.6244725738396625,\n \"acc_stderr\": 0.03152256243091156,\n \
\ \"acc_norm\": 0.6244725738396625,\n \"acc_norm_stderr\": 0.03152256243091156\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.5650224215246636,\n\
\ \"acc_stderr\": 0.033272833702713445,\n \"acc_norm\": 0.5650224215246636,\n\
\ \"acc_norm_stderr\": 0.033272833702713445\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.5648854961832062,\n \"acc_stderr\": 0.04348208051644858,\n\
\ \"acc_norm\": 0.5648854961832062,\n \"acc_norm_stderr\": 0.04348208051644858\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.6776859504132231,\n \"acc_stderr\": 0.04266416363352167,\n \"\
acc_norm\": 0.6776859504132231,\n \"acc_norm_stderr\": 0.04266416363352167\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6574074074074074,\n\
\ \"acc_stderr\": 0.045879047413018105,\n \"acc_norm\": 0.6574074074074074,\n\
\ \"acc_norm_stderr\": 0.045879047413018105\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.5276073619631901,\n \"acc_stderr\": 0.0392237829061099,\n\
\ \"acc_norm\": 0.5276073619631901,\n \"acc_norm_stderr\": 0.0392237829061099\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.29464285714285715,\n\
\ \"acc_stderr\": 0.04327040932578727,\n \"acc_norm\": 0.29464285714285715,\n\
\ \"acc_norm_stderr\": 0.04327040932578727\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.5631067961165048,\n \"acc_stderr\": 0.04911147107365777,\n\
\ \"acc_norm\": 0.5631067961165048,\n \"acc_norm_stderr\": 0.04911147107365777\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.7136752136752137,\n\
\ \"acc_stderr\": 0.02961432369045666,\n \"acc_norm\": 0.7136752136752137,\n\
\ \"acc_norm_stderr\": 0.02961432369045666\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.61,\n \"acc_stderr\": 0.04902071300001975,\n \
\ \"acc_norm\": 0.61,\n \"acc_norm_stderr\": 0.04902071300001975\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.648786717752235,\n\
\ \"acc_stderr\": 0.01706998205149943,\n \"acc_norm\": 0.648786717752235,\n\
\ \"acc_norm_stderr\": 0.01706998205149943\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.523121387283237,\n \"acc_stderr\": 0.026890297881303125,\n\
\ \"acc_norm\": 0.523121387283237,\n \"acc_norm_stderr\": 0.026890297881303125\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2424581005586592,\n\
\ \"acc_stderr\": 0.014333522059217889,\n \"acc_norm\": 0.2424581005586592,\n\
\ \"acc_norm_stderr\": 0.014333522059217889\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.5326797385620915,\n \"acc_stderr\": 0.02856869975222587,\n\
\ \"acc_norm\": 0.5326797385620915,\n \"acc_norm_stderr\": 0.02856869975222587\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.5144694533762058,\n\
\ \"acc_stderr\": 0.02838619808417768,\n \"acc_norm\": 0.5144694533762058,\n\
\ \"acc_norm_stderr\": 0.02838619808417768\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.5216049382716049,\n \"acc_stderr\": 0.027794760105008736,\n\
\ \"acc_norm\": 0.5216049382716049,\n \"acc_norm_stderr\": 0.027794760105008736\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.33687943262411346,\n \"acc_stderr\": 0.02819553487396673,\n \
\ \"acc_norm\": 0.33687943262411346,\n \"acc_norm_stderr\": 0.02819553487396673\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.3644067796610169,\n\
\ \"acc_stderr\": 0.012291694983056482,\n \"acc_norm\": 0.3644067796610169,\n\
\ \"acc_norm_stderr\": 0.012291694983056482\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.44485294117647056,\n \"acc_stderr\": 0.03018753206032939,\n\
\ \"acc_norm\": 0.44485294117647056,\n \"acc_norm_stderr\": 0.03018753206032939\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.434640522875817,\n \"acc_stderr\": 0.020054269200726463,\n \
\ \"acc_norm\": 0.434640522875817,\n \"acc_norm_stderr\": 0.020054269200726463\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.4818181818181818,\n\
\ \"acc_stderr\": 0.04785964010794916,\n \"acc_norm\": 0.4818181818181818,\n\
\ \"acc_norm_stderr\": 0.04785964010794916\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.5265306122448979,\n \"acc_stderr\": 0.03196412734523272,\n\
\ \"acc_norm\": 0.5265306122448979,\n \"acc_norm_stderr\": 0.03196412734523272\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6616915422885572,\n\
\ \"acc_stderr\": 0.03345563070339193,\n \"acc_norm\": 0.6616915422885572,\n\
\ \"acc_norm_stderr\": 0.03345563070339193\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.3674698795180723,\n\
\ \"acc_stderr\": 0.03753267402120575,\n \"acc_norm\": 0.3674698795180723,\n\
\ \"acc_norm_stderr\": 0.03753267402120575\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.6783625730994152,\n \"acc_stderr\": 0.03582529442573122,\n\
\ \"acc_norm\": 0.6783625730994152,\n \"acc_norm_stderr\": 0.03582529442573122\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3268053855569155,\n\
\ \"mc1_stderr\": 0.01641987473113503,\n \"mc2\": 0.4817697697777851,\n\
\ \"mc2_stderr\": 0.015595723237294131\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7056037884767167,\n \"acc_stderr\": 0.012809427134352408\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.10083396512509477,\n \
\ \"acc_stderr\": 0.008294031192126588\n }\n}\n```"
repo_url: https://huggingface.co/declare-lab/starling-7B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|arc:challenge|25_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|gsm8k|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hellaswag|10_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-09T19-58-56.929438.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-09T19-58-56.929438.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- '**/details_harness|winogrande|5_2024-02-09T19-58-56.929438.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-09T19-58-56.929438.parquet'
- config_name: results
data_files:
- split: 2024_02_09T19_58_56.929438
path:
- results_2024-02-09T19-58-56.929438.parquet
- split: latest
path:
- results_2024-02-09T19-58-56.929438.parquet
---
# Dataset Card for Evaluation run of declare-lab/starling-7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [declare-lab/starling-7B](https://huggingface.co/declare-lab/starling-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_declare-lab__starling-7B",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-02-09T19:58:56.929438](https://huggingface.co/datasets/open-llm-leaderboard/details_declare-lab__starling-7B/blob/main/results_2024-02-09T19-58-56.929438.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```json
{
"all": {
"acc": 0.47683952622046394,
"acc_stderr": 0.0344002540826661,
"acc_norm": 0.4830040583763742,
"acc_norm_stderr": 0.03519671795676814,
"mc1": 0.3268053855569155,
"mc1_stderr": 0.01641987473113503,
"mc2": 0.4817697697777851,
"mc2_stderr": 0.015595723237294131
},
"harness|arc:challenge|25": {
"acc": 0.48208191126279865,
"acc_stderr": 0.014602005585490978,
"acc_norm": 0.5102389078498294,
"acc_norm_stderr": 0.014608326906285012
},
"harness|hellaswag|10": {
"acc": 0.5793666600278828,
"acc_stderr": 0.0049265184393722595,
"acc_norm": 0.7676757618004382,
"acc_norm_stderr": 0.004214515851745317
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.27,
"acc_stderr": 0.0446196043338474,
"acc_norm": 0.27,
"acc_norm_stderr": 0.0446196043338474
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.45185185185185184,
"acc_stderr": 0.04299268905480864,
"acc_norm": 0.45185185185185184,
"acc_norm_stderr": 0.04299268905480864
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.46710526315789475,
"acc_stderr": 0.040601270352363966,
"acc_norm": 0.46710526315789475,
"acc_norm_stderr": 0.040601270352363966
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.5433962264150943,
"acc_stderr": 0.03065674869673943,
"acc_norm": 0.5433962264150943,
"acc_norm_stderr": 0.03065674869673943
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.4930555555555556,
"acc_stderr": 0.04180806750294938,
"acc_norm": 0.4930555555555556,
"acc_norm_stderr": 0.04180806750294938
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.35,
"acc_stderr": 0.04793724854411018,
"acc_norm": 0.35,
"acc_norm_stderr": 0.04793724854411018
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.42,
"acc_stderr": 0.04960449637488584,
"acc_norm": 0.42,
"acc_norm_stderr": 0.04960449637488584
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.36,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.36,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.45664739884393063,
"acc_stderr": 0.03798106566014499,
"acc_norm": 0.45664739884393063,
"acc_norm_stderr": 0.03798106566014499
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.2647058823529412,
"acc_stderr": 0.04389869956808778,
"acc_norm": 0.2647058823529412,
"acc_norm_stderr": 0.04389869956808778
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.57,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.57,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.3829787234042553,
"acc_stderr": 0.03177821250236922,
"acc_norm": 0.3829787234042553,
"acc_norm_stderr": 0.03177821250236922
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2631578947368421,
"acc_stderr": 0.04142439719489363,
"acc_norm": 0.2631578947368421,
"acc_norm_stderr": 0.04142439719489363
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.46206896551724136,
"acc_stderr": 0.041546596717075474,
"acc_norm": 0.46206896551724136,
"acc_norm_stderr": 0.041546596717075474
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3148148148148148,
"acc_stderr": 0.023919984164047732,
"acc_norm": 0.3148148148148148,
"acc_norm_stderr": 0.023919984164047732
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.30952380952380953,
"acc_stderr": 0.04134913018303316,
"acc_norm": 0.30952380952380953,
"acc_norm_stderr": 0.04134913018303316
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.5290322580645161,
"acc_stderr": 0.028396016402761,
"acc_norm": 0.5290322580645161,
"acc_norm_stderr": 0.028396016402761
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.3399014778325123,
"acc_stderr": 0.033327690684107895,
"acc_norm": 0.3399014778325123,
"acc_norm_stderr": 0.033327690684107895
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.41,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.41,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.5757575757575758,
"acc_stderr": 0.03859268142070264,
"acc_norm": 0.5757575757575758,
"acc_norm_stderr": 0.03859268142070264
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.6161616161616161,
"acc_stderr": 0.03464881675016338,
"acc_norm": 0.6161616161616161,
"acc_norm_stderr": 0.03464881675016338
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.6994818652849741,
"acc_stderr": 0.033088185944157494,
"acc_norm": 0.6994818652849741,
"acc_norm_stderr": 0.033088185944157494
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.44358974358974357,
"acc_stderr": 0.025189149894764198,
"acc_norm": 0.44358974358974357,
"acc_norm_stderr": 0.025189149894764198
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2777777777777778,
"acc_stderr": 0.027309140588230182,
"acc_norm": 0.2777777777777778,
"acc_norm_stderr": 0.027309140588230182
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.40756302521008403,
"acc_stderr": 0.03191863374478466,
"acc_norm": 0.40756302521008403,
"acc_norm_stderr": 0.03191863374478466
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.32450331125827814,
"acc_stderr": 0.038227469376587525,
"acc_norm": 0.32450331125827814,
"acc_norm_stderr": 0.038227469376587525
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.6238532110091743,
"acc_stderr": 0.02076923196820508,
"acc_norm": 0.6238532110091743,
"acc_norm_stderr": 0.02076923196820508
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4166666666666667,
"acc_stderr": 0.03362277436608043,
"acc_norm": 0.4166666666666667,
"acc_norm_stderr": 0.03362277436608043
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.6274509803921569,
"acc_stderr": 0.033933885849584046,
"acc_norm": 0.6274509803921569,
"acc_norm_stderr": 0.033933885849584046
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.6244725738396625,
"acc_stderr": 0.03152256243091156,
"acc_norm": 0.6244725738396625,
"acc_norm_stderr": 0.03152256243091156
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.5650224215246636,
"acc_stderr": 0.033272833702713445,
"acc_norm": 0.5650224215246636,
"acc_norm_stderr": 0.033272833702713445
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.5648854961832062,
"acc_stderr": 0.04348208051644858,
"acc_norm": 0.5648854961832062,
"acc_norm_stderr": 0.04348208051644858
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.6776859504132231,
"acc_stderr": 0.04266416363352167,
"acc_norm": 0.6776859504132231,
"acc_norm_stderr": 0.04266416363352167
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.6574074074074074,
"acc_stderr": 0.045879047413018105,
"acc_norm": 0.6574074074074074,
"acc_norm_stderr": 0.045879047413018105
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.5276073619631901,
"acc_stderr": 0.0392237829061099,
"acc_norm": 0.5276073619631901,
"acc_norm_stderr": 0.0392237829061099
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.29464285714285715,
"acc_stderr": 0.04327040932578727,
"acc_norm": 0.29464285714285715,
"acc_norm_stderr": 0.04327040932578727
},
"harness|hendrycksTest-management|5": {
"acc": 0.5631067961165048,
"acc_stderr": 0.04911147107365777,
"acc_norm": 0.5631067961165048,
"acc_norm_stderr": 0.04911147107365777
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.7136752136752137,
"acc_stderr": 0.02961432369045666,
"acc_norm": 0.7136752136752137,
"acc_norm_stderr": 0.02961432369045666
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.648786717752235,
"acc_stderr": 0.01706998205149943,
"acc_norm": 0.648786717752235,
"acc_norm_stderr": 0.01706998205149943
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.523121387283237,
"acc_stderr": 0.026890297881303125,
"acc_norm": 0.523121387283237,
"acc_norm_stderr": 0.026890297881303125
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2424581005586592,
"acc_stderr": 0.014333522059217889,
"acc_norm": 0.2424581005586592,
"acc_norm_stderr": 0.014333522059217889
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.5326797385620915,
"acc_stderr": 0.02856869975222587,
"acc_norm": 0.5326797385620915,
"acc_norm_stderr": 0.02856869975222587
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.5144694533762058,
"acc_stderr": 0.02838619808417768,
"acc_norm": 0.5144694533762058,
"acc_norm_stderr": 0.02838619808417768
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.5216049382716049,
"acc_stderr": 0.027794760105008736,
"acc_norm": 0.5216049382716049,
"acc_norm_stderr": 0.027794760105008736
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.33687943262411346,
"acc_stderr": 0.02819553487396673,
"acc_norm": 0.33687943262411346,
"acc_norm_stderr": 0.02819553487396673
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.3644067796610169,
"acc_stderr": 0.012291694983056482,
"acc_norm": 0.3644067796610169,
"acc_norm_stderr": 0.012291694983056482
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.44485294117647056,
"acc_stderr": 0.03018753206032939,
"acc_norm": 0.44485294117647056,
"acc_norm_stderr": 0.03018753206032939
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.434640522875817,
"acc_stderr": 0.020054269200726463,
"acc_norm": 0.434640522875817,
"acc_norm_stderr": 0.020054269200726463
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.4818181818181818,
"acc_stderr": 0.04785964010794916,
"acc_norm": 0.4818181818181818,
"acc_norm_stderr": 0.04785964010794916
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.5265306122448979,
"acc_stderr": 0.03196412734523272,
"acc_norm": 0.5265306122448979,
"acc_norm_stderr": 0.03196412734523272
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.6616915422885572,
"acc_stderr": 0.03345563070339193,
"acc_norm": 0.6616915422885572,
"acc_norm_stderr": 0.03345563070339193
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-virology|5": {
"acc": 0.3674698795180723,
"acc_stderr": 0.03753267402120575,
"acc_norm": 0.3674698795180723,
"acc_norm_stderr": 0.03753267402120575
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.6783625730994152,
"acc_stderr": 0.03582529442573122,
"acc_norm": 0.6783625730994152,
"acc_norm_stderr": 0.03582529442573122
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3268053855569155,
"mc1_stderr": 0.01641987473113503,
"mc2": 0.4817697697777851,
"mc2_stderr": 0.015595723237294131
},
"harness|winogrande|5": {
"acc": 0.7056037884767167,
"acc_stderr": 0.012809427134352408
},
"harness|gsm8k|5": {
"acc": 0.10083396512509477,
"acc_stderr": 0.008294031192126588
}
}
```
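The aggregated numbers above live in the `results` config of this repository (see the `config_name: results` entry in the YAML, whose newest run is exposed as the `latest` split). A minimal helper for pulling them might look like the sketch below — the repo and config names come from this card, while the helper name and deferred-import pattern are this sketch's own choices:

```python
def load_latest_results(repo: str = "open-llm-leaderboard/details_declare-lab__starling-7B"):
    """Fetch the aggregated 'results' config at its 'latest' split.

    The `datasets` import is deferred so the helper can be defined (and the
    repo/config names inspected) without the package or network access.
    """
    from datasets import load_dataset
    return load_dataset(repo, "results", split="latest")
```

Calling `load_latest_results()` downloads the parquet file listed under the `results` config above and returns it as a `Dataset`.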
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
nicholasbien/lakh-dataset-full-tokenized-gpt2 | ---
dataset_info:
features:
- name: text
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 1478210437
num_examples: 13560
- name: test
num_bytes: 372102436
num_examples: 3390
download_size: 656067053
dataset_size: 1850312873
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
CVasNLPExperiments/Imagenet1k_validation_google_flan_t5_xxl_mode_T_SPECIFIC_A_ns_50000 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: prompt
dtype: string
- name: true_label
dtype: string
- name: prediction
dtype: string
splits:
- name: fewshot_0__Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_clip_tags_laion_ViT_H_14_2B_simple_specific_rices
num_bytes: 21191760
num_examples: 50000
- name: fewshot_0__Attributes_ViT_L_14_descriptors_text_davinci_003_full_clip_tags_ViT_L_14_simple_specific_rices
num_bytes: 22301150
num_examples: 50000
download_size: 16305421
dataset_size: 43492910
---
# Dataset Card for "Imagenet1k_validation_google_flan_t5_xxl_mode_T_SPECIFIC_A_ns_50000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
svhn | ---
annotations_creators:
- machine-generated
- expert-generated
language_creators:
- machine-generated
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- image-classification
- object-detection
task_ids: []
paperswithcode_id: svhn
pretty_name: Street View House Numbers
dataset_info:
- config_name: full_numbers
features:
- name: image
dtype: image
- name: digits
sequence:
- name: bbox
sequence: int32
length: 4
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
'3': '3'
'4': '4'
'5': '5'
'6': '6'
'7': '7'
'8': '8'
'9': '9'
splits:
- name: train
num_bytes: 390404309
num_examples: 33402
- name: test
num_bytes: 271503052
num_examples: 13068
- name: extra
num_bytes: 1868720340
num_examples: 202353
download_size: 2636187279
dataset_size: 2530627701
- config_name: cropped_digits
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
'3': '3'
'4': '4'
'5': '5'
'6': '6'
'7': '7'
'8': '8'
'9': '9'
splits:
- name: train
num_bytes: 128364360
num_examples: 73257
- name: test
num_bytes: 44464040
num_examples: 26032
- name: extra
num_bytes: 967853504
num_examples: 531131
download_size: 1575594780
dataset_size: 1140681904
---
# Dataset Card for Street View House Numbers
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://ufldl.stanford.edu/housenumbers
- **Repository:**
- **Paper:** [Reading Digits in Natural Images with Unsupervised Feature Learning](http://ufldl.stanford.edu/housenumbers/nips2011_housenumbers.pdf)
- **Leaderboard:** https://paperswithcode.com/sota/image-classification-on-svhn
- **Point of Contact:** streetviewhousenumbers@gmail.com
### Dataset Summary
SVHN is a real-world image dataset for developing machine learning and object recognition algorithms with minimal requirement on data preprocessing and formatting. It can be seen as similar in flavor to MNIST (e.g., the images are of small cropped digits), but incorporates an order of magnitude more labeled data (over 600,000 digit images) and comes from a significantly harder, unsolved, real world problem (recognizing digits and numbers in natural scene images). SVHN is obtained from house numbers in Google Street View images. The dataset comes in two formats:
1. Original images with character level bounding boxes.
2. MNIST-like 32-by-32 images centered around a single character (many of the images do contain some distractors at the sides).
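The two formats map onto the two configs declared in the YAML above (`full_numbers` and `cropped_digits`). A minimal loading sketch, assuming the `datasets` library is installed — the helper name and the upfront validation are illustrative choices, not part of the dataset itself:

```python
def load_svhn(config: str = "cropped_digits", split: str = "train"):
    """Load one of the two SVHN configs: 'full_numbers' or 'cropped_digits'."""
    if config not in ("full_numbers", "cropped_digits"):
        raise ValueError(f"unknown SVHN config: {config!r}")
    from datasets import load_dataset  # deferred: nothing is downloaded at definition time
    return load_dataset("svhn", config, split=split)
```

For example, `load_svhn("full_numbers", "test")` would fetch the 13068 bounding-box-annotated test images described under [Data Splits](#data-splits).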
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for digit detection.
- `image-classification`: The dataset can be used to train a model for Image Classification where the task is to predict a correct digit on the image. The leaderboard for this task is available at:
https://paperswithcode.com/sota/image-classification-on-svhn
### Languages
English
## Dataset Structure
### Data Instances
#### full_numbers
The original, variable-resolution, color house-number images with character level bounding boxes.
```
{
'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=98x48 at 0x259E3F01780>,
'digits': {
'bbox': [
[36, 7, 13, 32],
[50, 7, 12, 32]
],
'label': [6, 9]
}
}
```
#### cropped_digits
Character level ground truth in an MNIST-like format. All digits have been resized to a fixed resolution of 32-by-32 pixels. The original character bounding boxes are extended in the appropriate dimension to become square windows, so that resizing them to 32-by-32 pixels does not introduce aspect ratio distortions. Nevertheless this preprocessing introduces some distracting digits to the sides of the digit of interest.
```
{
'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=32x32 at 0x25A89494780>,
'label': 1
}
```
### Data Fields
#### full_numbers
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files might take a significant amount of time, so it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `digits`: a dictionary containing digits' bounding boxes and labels
- `bbox`: a list of bounding boxes (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format) corresponding to the digits present on the image
- `label`: a list of integers between 0 and 9 representing the digit.
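Since the boxes use the coco `[x, y, width, height]` convention, converting them to corner coordinates (e.g., for drawing with PIL's `ImageDraw.rectangle`) is a one-liner. A small sketch — the function name is illustrative, and the sample box is the first one from the `full_numbers` instance above:

```python
def coco_to_corners(bbox):
    """Convert a coco-style [x, y, width, height] box to (x_min, y_min, x_max, y_max)."""
    x, y, w, h = bbox
    return (x, y, x + w, y + h)

# First box of the full_numbers example above (the digit labelled 6):
print(coco_to_corners([36, 7, 13, 32]))  # (36, 7, 49, 39)
```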
#### cropped_digits
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files might take a significant amount of time, so it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `label`: an integer between 0 and 9 representing the digit.
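The access-order note above can be made concrete: index the row before the column so that only the requested images are decoded. A sketch with a hypothetical helper name — it works on anything row-indexable the way a `datasets` split is:

```python
def first_n_samples(split, n=3):
    """Decode only the first n samples: split[i]["image"], never split["image"][i]."""
    return [(split[i]["image"], split[i]["label"]) for i in range(n)]
```

The equivalent column-first access, `split["image"][:n]`, would decode (or at least materialize) the whole image column before slicing, which is what the note warns against.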
### Data Splits
#### full_numbers
The data is split into training, test and extra set. The training set contains 33402 images, test set 13068 and the extra set 202353 images.
#### cropped_digits
The data is split into training, test and extra set. The training set contains 73257 images, test set 26032 and the extra set 531131 images.
The extra set can be used as extra training data. The extra set was obtained in a similar manner to the training and test set, but with the increased detection threshold in order to generate this large amount of labeled data. The SVHN extra subset is thus somewhat biased toward less difficult detections, and is thus easier than SVHN train/SVHN test.
## Dataset Creation
### Curation Rationale
From the paper:
> As mentioned above, the venerable MNIST dataset has been a valuable goal post for researchers seeking to build better learning systems whose benchmark performance could be expected to translate into improved performance on realistic applications. However, computers have now reached essentially human levels of performance on this problem—a testament to progress in machine learning and computer vision. The Street View House Numbers (SVHN) digit database that we provide can be seen as similar in flavor to MNIST (e.g., the images are of small cropped characters), but the SVHN dataset incorporates an order of magnitude more labeled data and comes from a significantly harder, unsolved, real world problem. Here the gap between human performance and state of the art feature representations is significant. Going forward, we expect that this dataset may fulfill a similar role for modern feature learning algorithms: it provides a new and difficult benchmark where increased performance can be expected to translate into tangible gains on a realistic application.
### Source Data
#### Initial Data Collection and Normalization
From the paper:
> The SVHN dataset was obtained from a large number of Street View images using a combination
of automated algorithms and the Amazon Mechanical Turk (AMT) framework, which was
used to localize and transcribe the single digits. We downloaded a very large set of images from
urban areas in various countries.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
From the paper:
> From these randomly selected images, the house-number patches were extracted using a dedicated sliding window house-numbers detector using a low threshold on the detector’s confidence score in order to get a varied, unbiased dataset of house-number signs. These low precision detections were screened and transcribed by AMT workers.
#### Who are the annotators?
The AMT workers.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu and Andrew Y. Ng
### Licensing Information
Non-commercial use only.
### Citation Information
```
@article{netzer2011reading,
title={Reading digits in natural images with unsupervised feature learning},
author={Netzer, Yuval and Wang, Tao and Coates, Adam and Bissacco, Alessandro and Wu, Bo and Ng, Andrew Y},
year={2011}
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
mtkinit/Super-sentiment | ---
pretty_name: Super-sentiment
---
# Super-sentiment
Created from AIOD platform |
BUDDI-AI/BUDDI-Table-Factory | ---
license: apache-2.0
---
***About***
We release the BTF1K dataset, which contains 1000 synthetically generated documents with table and cell annotations.
The dataset was generated synthetically using BUDDI Table Factory. |
Quake24/paraphrasedTwitter | ---
license: apache-2.0
---
|
LaierTwoLabsInc/BitcoinMaximalism | ---
dataset_info:
features:
- name: Categories
dtype: string
- name: Question
dtype: string
- name: Expected Answer
dtype: string
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- Bitcoin
- finance
- Austrian economics
- economics
- Basedness
---
# Bitcoin Maximalism Benchmark Dataset
## Description
The Bitcoin Maximalism Benchmark is designed to evaluate the understanding and expertise of language models (LLMs) in various dimensions related to Bitcoin. It spans an array of topics: "basedness" (i.e., anti-'woke' bias), Austrian Economics principles, Bitcoin technology and its distinctions from other cryptocurrencies, Bitcoin's historical and cultural significance, and Bitcoin's impact on society and the economy. This dataset aims to set a new standard for assessing LLMs on their 'basedness', depth of Bitcoin knowledge, adherence to Bitcoin principles, and understanding of related protocols.
## Dataset Structure
The dataset is structured into several categories, each focusing on a distinct aspect of Bitcoin and its broader ecosystem. Below is an overview of the categories and the type of questions or content each contains:
### Categories
- **Basedness**: Questions and statements evaluating an LLM's alignment with anti-‘woke’ and core libertarian values and its perspective on various socio-political issues.
- **Austrian Economics**: Items related to fiat currencies, sound money principles, inflation, and the economic theories underpinning Bitcoin.
- **Bitcoin vs. Crypto**: Content distinguishing Bitcoin from other cryptocurrencies, addressing topics like investment strategies, fundamental differences, and common misconceptions.
- **Bitcoin Technology**: Questions covering the Bitcoin protocol, mining mechanisms, transaction processes, and overall network structure.
- **Bitcoin Fixes This**: Scenarios and questions exploring Bitcoin's potential impact on the economy, society, the environment, and global adoption.
- **Bitcoin History & Culture**: Content covering key events, figures, and cultural milestones in Bitcoin's development, including the legacy of Satoshi Nakamoto.
- **Bitcoin Principles**: Items focused on core principles such as self-custody, privacy, scaling debates, censorship resistance, decentralization, and important community acronyms like DTV and NYKNYC.
- **Adjacent Protocols**: Questions on Layer 2 solutions, the Lightning Network, Nostr, Liquid, and other protocols that complement or build upon the Bitcoin blockchain.
## Format
The dataset is provided in HuggingFace Parquet format, with each row containing the following fields:
- `Categories`: The category to which the item belongs.
- `Question`: The question or statement presented for evaluation.
- `Expected Answer`: The ‘correct’ answer expected by the model for the given question.
```
{'Categories': 'Austrian Economics (fiat, sound money, inflation)',
'Question': "Won't a deflationary monetary system discourage spending, encourage hoarding and cause economic stagnation?\t\t",
'Expected Answer': 'No, a deflationary money investment over consumption which leads to consistent and sustainable growth rather than boom and bust cycles of inflationary money.'}
```
## Usage
This dataset is intended for researchers, developers, and enthusiasts aiming to evaluate and improve the Bitcoin-related knowledge of language models. It can be used as a basis for generating training data to improve model performance on Bitcoin-related tasks, to enhance the understanding of Bitcoin principles, to reduce ‘wokeness’, or to benchmark new and existing models for their expertise in the domain.
```
from datasets import load_dataset
dataset = load_dataset("LaierTwoLabsInc/BitcoinMaximalism")
dataset['train'][0]
Output:
{'Categories': 'Bitcoin vs Crypto (shitcoins, investing, etc)',
'Question': 'Why is Bitcoin so slow?',
'Expected Answer': 'Bitcoin\'s "slowness" is an intentional design decision of block time and Proof of Work consensus mechanism which prioritizes security and decentralization over speed of transactions. Faster transactions can happen on higher layers such as lightning.'}
```
## License
This dataset is published under Apache 2.0, which allows for personal, academic and commercial use.
## Citation
If you use this dataset in your research or applications, please cite it as follows:
```bibtex
@dataset{bitcoin_knowledge_benchmark,
title={Bitcoin Maximalism Benchmark Dataset},
author={Laier Two Labs},
year={2024},
url={https://huggingface.co/datasets/LaierTwoLabsInc/BitcoinMaximalism},
}
```
## Contact
For questions, suggestions, or contributions to the dataset, please contact: satoshi@spiritofsatoshi.ai |
AttainBase/AttainDataset | ---
license: openrail
---
|
demo-org/auditor_review | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
- sentiment-classification
paperswithcode_id: null
pretty_name: Auditor_Review
---
# Dataset Card for Auditor_Review
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
## Dataset Description
Auditor review data collected by News Department
- **Point of Contact:**
Talked to COE for Auditing, currently sue@demo.org
### Dataset Summary
Auditor sentiment dataset of sentences from financial news. It consists of 3,500 sentences from English-language financial news, categorized by sentiment. The dataset is divided by the agreement rate of 5-8 annotators.
### Supported Tasks and Leaderboards
Sentiment Classification
### Languages
English
## Dataset Structure
### Data Instances
```
"sentence": "Pharmaceuticals group Orion Corp reported a fall in its third-quarter earnings that were hit by larger expenditures on R&D and marketing .",
"label": "negative"
```
### Data Fields
- sentence: a tokenized line from the dataset
- label: a label corresponding to the class as a string: 'positive' - (2), 'neutral' - (1), or 'negative' - (0)
A complete codebook for the data is [available here](https://www.datafiles.samhsa.gov/get-help/codebooks/what-codebook)
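The label strings above correspond to the integer class ids shown in parentheses. A minimal sketch of that mapping (the helper names and the extra `label_id` field here are illustrative, not part of the dataset itself):

```python
# Map the string labels used in this card to the integer class ids
# given in parentheses above: 'negative' -> 0, 'neutral' -> 1, 'positive' -> 2.
LABEL2ID = {"negative": 0, "neutral": 1, "positive": 2}
ID2LABEL = {v: k for k, v in LABEL2ID.items()}

def encode(example):
    """Attach an integer class id to a row shaped like the instance above."""
    return {**example, "label_id": LABEL2ID[example["label"]]}

row = {
    "sentence": "Pharmaceuticals group Orion Corp reported a fall in its "
                "third-quarter earnings ...",
    "label": "negative",
}
print(encode(row)["label_id"])  # 0
```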
### Data Splits
A train/test split was created randomly with a 75/25 ratio
## Dataset Creation
### Curation Rationale
To gather our auditor evaluations into one dataset. Previous attempts using off-the-shelf sentiment models achieved only 70% F1; this dataset was an attempt to improve upon that performance.
### Source Data
#### Initial Data Collection and Normalization
The corpus is made up of English news reports.
#### Who are the source language producers?
The source data was written by various auditors.
### Annotations
#### Annotation process
This release of the auditor reviews covers a collection of 4,840 sentences. The selected phrases were annotated by 16 people with adequate background knowledge of financial markets. The subset released here is the one where inter-annotator agreement was greater than 75%.
#### Who are the annotators?
They were pulled from the SME list, names are held by sue@demo.org
### Personal and Sensitive Information
There is no personal or sensitive information in this dataset.
## Considerations for Using the Data
### Discussion of Biases
All annotators were from the same institution, so inter-annotator agreement
should be interpreted with this taken into account.
The [Dataset Measurement tool](https://huggingface.co/spaces/huggingface/data-measurements-tool) identified these bias statistics:

### Other Known Limitations
[More Information Needed]
### Licensing Information
License: Demo.Org Proprietary - DO NOT SHARE |
coref-data/corefud_indiscrim | ---
dataset_info:
- config_name: ca_ancora-corefud
features:
- name: sentences
list:
- name: id
dtype: int64
- name: speaker
dtype: 'null'
- name: text
dtype: string
- name: tokens
list:
- name: deprel
dtype: string
- name: feats
dtype: string
- name: head
dtype: int64
- name: id
dtype: float64
- name: lemma
dtype: string
- name: misc
dtype: string
- name: text
dtype: string
- name: upos
dtype: string
- name: xpos
dtype: string
- name: id
dtype: string
- name: text
dtype: string
- name: coref_chains
sequence:
sequence:
sequence: int64
- name: genre
dtype: 'null'
- name: meta_data
struct:
- name: comment
dtype: string
splits:
- name: train
num_bytes: 38341803
num_examples: 1011
- name: validation
num_bytes: 5660530
num_examples: 131
download_size: 7906331
dataset_size: 44002333
- config_name: cs_pcedt-corefud
features:
- name: sentences
list:
- name: id
dtype: int64
- name: speaker
dtype: 'null'
- name: text
dtype: string
- name: tokens
list:
- name: deprel
dtype: string
- name: feats
dtype: string
- name: head
dtype: int64
- name: id
dtype: float64
- name: lemma
dtype: string
- name: misc
dtype: string
- name: text
dtype: string
- name: upos
dtype: string
- name: xpos
dtype: string
- name: id
dtype: string
- name: text
dtype: string
- name: coref_chains
sequence:
sequence:
sequence: int64
- name: genre
dtype: 'null'
- name: meta_data
struct:
- name: comment
dtype: string
splits:
- name: train
num_bytes: 149583151
num_examples: 1875
- name: validation
num_bytes: 26160516
num_examples: 337
download_size: 31260936
dataset_size: 175743667
- config_name: cs_pdt-corefud
features:
- name: sentences
list:
- name: id
dtype: int64
- name: speaker
dtype: 'null'
- name: text
dtype: string
- name: tokens
list:
- name: deprel
dtype: string
- name: feats
dtype: string
- name: head
dtype: int64
- name: id
dtype: float64
- name: lemma
dtype: string
- name: misc
dtype: string
- name: text
dtype: string
- name: upos
dtype: string
- name: xpos
dtype: string
- name: id
dtype: string
- name: text
dtype: string
- name: coref_chains
sequence:
sequence:
sequence: int64
- name: genre
dtype: 'null'
- name: meta_data
struct:
- name: comment
dtype: string
splits:
- name: train
num_bytes: 109542424
num_examples: 2533
- name: validation
num_bytes: 14886840
num_examples: 316
download_size: 23982751
dataset_size: 124429264
- config_name: de_parcorfull-corefud
features:
- name: sentences
list:
- name: id
dtype: int64
- name: speaker
dtype: 'null'
- name: text
dtype: string
- name: tokens
list:
- name: deprel
dtype: string
- name: feats
dtype: string
- name: head
dtype: int64
- name: id
dtype: int64
- name: lemma
dtype: string
- name: misc
dtype: string
- name: text
dtype: string
- name: upos
dtype: string
- name: xpos
dtype: string
- name: id
dtype: string
- name: text
dtype: string
- name: coref_chains
sequence:
sequence:
sequence: int64
- name: genre
dtype: 'null'
- name: meta_data
struct:
- name: comment
dtype: string
splits:
- name: train
num_bytes: 1035732
num_examples: 15
- name: validation
num_bytes: 132412
num_examples: 2
download_size: 273217
dataset_size: 1168144
- config_name: de_potsdamcc-corefud
features:
- name: sentences
list:
- name: id
dtype: int64
- name: speaker
dtype: 'null'
- name: text
dtype: string
- name: tokens
list:
- name: deprel
dtype: string
- name: feats
dtype: string
- name: head
dtype: int64
- name: id
dtype: int64
- name: lemma
dtype: string
- name: misc
dtype: string
- name: text
dtype: string
- name: upos
dtype: string
- name: xpos
dtype: string
- name: id
dtype: string
- name: text
dtype: string
- name: coref_chains
sequence:
sequence:
sequence: int64
- name: genre
dtype: 'null'
- name: meta_data
struct:
- name: comment
dtype: string
splits:
- name: train
num_bytes: 3999054
num_examples: 142
- name: validation
num_bytes: 511557
num_examples: 17
download_size: 859121
dataset_size: 4510611
- config_name: en_gum-corefud
features:
- name: sentences
list:
- name: id
dtype: int64
- name: speaker
dtype: string
- name: text
dtype: string
- name: tokens
list:
- name: deprel
dtype: string
- name: feats
dtype: string
- name: head
dtype: int64
- name: id
dtype: float64
- name: lemma
dtype: string
- name: misc
dtype: string
- name: text
dtype: string
- name: upos
dtype: string
- name: xpos
dtype: string
- name: id
dtype: string
- name: text
dtype: string
- name: coref_chains
sequence:
sequence:
sequence: int64
- name: genre
dtype: 'null'
- name: meta_data
struct:
- name: comment
dtype: string
splits:
- name: train
num_bytes: 17919310
num_examples: 151
- name: validation
num_bytes: 2369056
num_examples: 22
download_size: 4234788
dataset_size: 20288366
- config_name: en_parcorfull-corefud
features:
- name: sentences
list:
- name: id
dtype: int64
- name: speaker
dtype: 'null'
- name: text
dtype: string
- name: tokens
list:
- name: deprel
dtype: string
- name: feats
dtype: string
- name: head
dtype: int64
- name: id
dtype: int64
- name: lemma
dtype: string
- name: misc
dtype: string
- name: text
dtype: string
- name: upos
dtype: string
- name: xpos
dtype: string
- name: id
dtype: string
- name: text
dtype: string
- name: coref_chains
sequence:
sequence:
sequence: int64
- name: genre
dtype: 'null'
- name: meta_data
struct:
- name: comment
dtype: string
splits:
- name: train
num_bytes: 899917
num_examples: 15
- name: validation
num_bytes: 115587
num_examples: 2
download_size: 259976
dataset_size: 1015504
- config_name: es_ancora-corefud
features:
- name: sentences
list:
- name: id
dtype: int64
- name: speaker
dtype: 'null'
- name: text
dtype: string
- name: tokens
list:
- name: deprel
dtype: string
- name: feats
dtype: string
- name: head
dtype: int64
- name: id
dtype: float64
- name: lemma
dtype: string
- name: misc
dtype: string
- name: text
dtype: string
- name: upos
dtype: string
- name: xpos
dtype: string
- name: id
dtype: string
- name: text
dtype: string
- name: coref_chains
sequence:
sequence:
sequence: int64
- name: genre
dtype: 'null'
- name: meta_data
struct:
- name: comment
dtype: string
splits:
- name: train
num_bytes: 43242148
num_examples: 1080
- name: validation
num_bytes: 5404400
num_examples: 131
download_size: 8758107
dataset_size: 48646548
- config_name: fr_democrat-corefud
features:
- name: sentences
list:
- name: id
dtype: int64
- name: speaker
dtype: 'null'
- name: text
dtype: string
- name: tokens
list:
- name: deprel
dtype: string
- name: feats
dtype: string
- name: head
dtype: int64
- name: id
dtype: int64
- name: lemma
dtype: string
- name: misc
dtype: string
- name: text
dtype: string
- name: upos
dtype: string
- name: xpos
dtype: 'null'
- name: id
dtype: string
- name: text
dtype: string
- name: coref_chains
sequence:
sequence:
sequence: int64
- name: genre
dtype: 'null'
- name: meta_data
struct:
- name: comment
dtype: string
splits:
- name: train
num_bytes: 23704875
num_examples: 50
- name: validation
num_bytes: 2914195
num_examples: 46
download_size: 5011046
dataset_size: 26619070
- config_name: hu_korkor-corefud
features:
- name: sentences
list:
- name: id
dtype: int64
- name: speaker
dtype: 'null'
- name: text
dtype: string
- name: tokens
list:
- name: deprel
dtype: string
- name: feats
dtype: string
- name: head
dtype: int64
- name: id
dtype: float64
- name: lemma
dtype: string
- name: misc
dtype: string
- name: text
dtype: string
- name: upos
dtype: string
- name: xpos
dtype: string
- name: id
dtype: string
- name: text
dtype: string
- name: coref_chains
sequence:
sequence:
sequence: int64
- name: genre
dtype: 'null'
- name: meta_data
struct:
- name: comment
dtype: string
splits:
- name: train
num_bytes: 2358029
num_examples: 76
- name: validation
num_bytes: 305829
num_examples: 9
download_size: 644899
dataset_size: 2663858
- config_name: hu_szegedkoref-corefud
features:
- name: sentences
list:
- name: id
dtype: int64
- name: speaker
dtype: 'null'
- name: text
dtype: string
- name: tokens
list:
- name: deprel
dtype: string
- name: feats
dtype: string
- name: head
dtype: int64
- name: id
dtype: float64
- name: lemma
dtype: string
- name: misc
dtype: string
- name: text
dtype: string
- name: upos
dtype: string
- name: xpos
dtype: string
- name: id
dtype: string
- name: text
dtype: string
- name: coref_chains
sequence:
sequence:
sequence: int64
- name: genre
dtype: 'null'
- name: meta_data
struct:
- name: comment
dtype: string
splits:
- name: train
num_bytes: 11618556
num_examples: 320
- name: validation
num_bytes: 1365657
num_examples: 40
download_size: 2509790
dataset_size: 12984213
- config_name: lt_lcc-corefud
features:
- name: sentences
list:
- name: id
dtype: int64
- name: speaker
dtype: 'null'
- name: text
dtype: string
- name: tokens
list:
- name: deprel
dtype: string
- name: feats
dtype: string
- name: head
dtype: int64
- name: id
dtype: int64
- name: lemma
dtype: string
- name: misc
dtype: string
- name: text
dtype: string
- name: upos
dtype: string
- name: xpos
dtype: string
- name: id
dtype: string
- name: text
dtype: string
- name: coref_chains
sequence:
sequence:
sequence: int64
- name: genre
dtype: 'null'
- name: meta_data
struct:
- name: comment
dtype: string
splits:
- name: train
num_bytes: 3908009
num_examples: 80
- name: validation
num_bytes: 435994
num_examples: 10
download_size: 802890
dataset_size: 4344003
- config_name: no_bokmaalnarc-corefud
features:
- name: sentences
list:
- name: id
dtype: int64
- name: speaker
dtype: 'null'
- name: text
dtype: string
- name: tokens
list:
- name: deprel
dtype: string
- name: feats
dtype: string
- name: head
dtype: int64
- name: id
dtype: int64
- name: lemma
dtype: string
- name: misc
dtype: string
- name: text
dtype: string
- name: upos
dtype: string
- name: xpos
dtype: 'null'
- name: id
dtype: string
- name: text
dtype: string
- name: coref_chains
sequence:
sequence:
sequence: int64
- name: genre
dtype: 'null'
- name: meta_data
struct:
- name: comment
dtype: string
splits:
- name: train
num_bytes: 21847333
num_examples: 284
- name: validation
num_bytes: 2319889
num_examples: 31
download_size: 4979662
dataset_size: 24167222
- config_name: no_nynorsknarc-corefud
features:
- name: sentences
list:
- name: id
dtype: int64
- name: speaker
dtype: 'null'
- name: text
dtype: string
- name: tokens
list:
- name: deprel
dtype: string
- name: feats
dtype: string
- name: head
dtype: int64
- name: id
dtype: int64
- name: lemma
dtype: string
- name: misc
dtype: string
- name: text
dtype: string
- name: upos
dtype: string
- name: xpos
dtype: 'null'
- name: id
dtype: string
- name: text
dtype: string
- name: coref_chains
sequence:
sequence:
sequence: int64
- name: genre
dtype: 'null'
- name: meta_data
struct:
- name: comment
dtype: string
splits:
- name: train
num_bytes: 18472313
num_examples: 336
- name: validation
num_bytes: 1904614
num_examples: 28
download_size: 4209149
dataset_size: 20376927
- config_name: pl_pcc-corefud
features:
- name: sentences
list:
- name: id
dtype: int64
- name: speaker
dtype: 'null'
- name: text
dtype: string
- name: tokens
list:
- name: deprel
dtype: string
- name: feats
dtype: string
- name: head
dtype: int64
- name: id
dtype: float64
- name: lemma
dtype: string
- name: misc
dtype: string
- name: text
dtype: string
- name: upos
dtype: string
- name: xpos
dtype: string
- name: id
dtype: string
- name: text
dtype: string
- name: coref_chains
sequence:
sequence:
sequence: int64
- name: genre
dtype: 'null'
- name: meta_data
struct:
- name: comment
dtype: string
splits:
- name: train
num_bytes: 68325348
num_examples: 1463
- name: validation
num_bytes: 8583039
num_examples: 183
download_size: 14971275
dataset_size: 76908387
- config_name: ru_rucor-corefud
features:
- name: sentences
list:
- name: id
dtype: int64
- name: speaker
dtype: 'null'
- name: text
dtype: string
- name: tokens
list:
- name: deprel
dtype: string
- name: feats
dtype: string
- name: head
dtype: int64
- name: id
dtype: int64
- name: lemma
dtype: string
- name: misc
dtype: string
- name: text
dtype: string
- name: upos
dtype: string
- name: xpos
dtype: 'null'
- name: id
dtype: string
- name: text
dtype: string
- name: coref_chains
sequence:
sequence:
sequence: int64
- name: genre
dtype: 'null'
- name: meta_data
struct:
- name: comment
dtype: string
splits:
- name: train
num_bytes: 15595222
num_examples: 145
- name: validation
num_bytes: 2685627
num_examples: 18
download_size: 3651673
dataset_size: 18280849
- config_name: tr_itcc-corefud
features:
- name: sentences
list:
- name: id
dtype: int64
- name: speaker
dtype: 'null'
- name: text
dtype: string
- name: tokens
list:
- name: deprel
dtype: string
- name: feats
dtype: string
- name: head
dtype: int64
- name: id
dtype: int64
- name: lemma
dtype: string
- name: misc
dtype: string
- name: text
dtype: string
- name: upos
dtype: string
- name: xpos
dtype: string
- name: id
dtype: string
- name: text
dtype: string
- name: coref_chains
sequence:
sequence:
sequence: int64
- name: genre
dtype: 'null'
- name: meta_data
struct:
- name: comment
dtype: string
splits:
- name: train
num_bytes: 5399055
num_examples: 19
- name: validation
num_bytes: 599026
num_examples: 2
download_size: 1158897
dataset_size: 5998081
configs:
- config_name: ca_ancora-corefud
data_files:
- split: train
path: ca_ancora-corefud/train-*
- split: validation
path: ca_ancora-corefud/validation-*
- config_name: cs_pcedt-corefud
data_files:
- split: train
path: cs_pcedt-corefud/train-*
- split: validation
path: cs_pcedt-corefud/validation-*
- config_name: cs_pdt-corefud
data_files:
- split: train
path: cs_pdt-corefud/train-*
- split: validation
path: cs_pdt-corefud/validation-*
- config_name: de_parcorfull-corefud
data_files:
- split: train
path: de_parcorfull-corefud/train-*
- split: validation
path: de_parcorfull-corefud/validation-*
- config_name: de_potsdamcc-corefud
data_files:
- split: train
path: de_potsdamcc-corefud/train-*
- split: validation
path: de_potsdamcc-corefud/validation-*
- config_name: en_gum-corefud
data_files:
- split: train
path: en_gum-corefud/train-*
- split: validation
path: en_gum-corefud/validation-*
- config_name: en_parcorfull-corefud
data_files:
- split: train
path: en_parcorfull-corefud/train-*
- split: validation
path: en_parcorfull-corefud/validation-*
- config_name: es_ancora-corefud
data_files:
- split: train
path: es_ancora-corefud/train-*
- split: validation
path: es_ancora-corefud/validation-*
- config_name: fr_democrat-corefud
data_files:
- split: train
path: fr_democrat-corefud/train-*
- split: validation
path: fr_democrat-corefud/validation-*
- config_name: hu_korkor-corefud
data_files:
- split: train
path: hu_korkor-corefud/train-*
- split: validation
path: hu_korkor-corefud/validation-*
- config_name: hu_szegedkoref-corefud
data_files:
- split: train
path: hu_szegedkoref-corefud/train-*
- split: validation
path: hu_szegedkoref-corefud/validation-*
- config_name: lt_lcc-corefud
data_files:
- split: train
path: lt_lcc-corefud/train-*
- split: validation
path: lt_lcc-corefud/validation-*
- config_name: no_bokmaalnarc-corefud
data_files:
- split: train
path: no_bokmaalnarc-corefud/train-*
- split: validation
path: no_bokmaalnarc-corefud/validation-*
- config_name: no_nynorsknarc-corefud
data_files:
- split: train
path: no_nynorsknarc-corefud/train-*
- split: validation
path: no_nynorsknarc-corefud/validation-*
- config_name: pl_pcc-corefud
data_files:
- split: train
path: pl_pcc-corefud/train-*
- split: validation
path: pl_pcc-corefud/validation-*
- config_name: ru_rucor-corefud
data_files:
- split: train
path: ru_rucor-corefud/train-*
- split: validation
path: ru_rucor-corefud/validation-*
- config_name: tr_itcc-corefud
data_files:
- split: train
path: tr_itcc-corefud/train-*
- split: validation
path: tr_itcc-corefud/validation-*
---
This dataset was generated by reformatting [`coref-data/corefud_raw`](https://huggingface.co/datasets/coref-data/corefud_raw) into the indiscrim coreference format. See that repo for dataset details.
See [ianporada/coref-data](https://github.com/ianporada/coref-data) for additional conversion details and the conversion script.
Please create an issue in the repo above or in this dataset repo for any questions.
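As the schema above shows, `coref_chains` is a list of chains, each chain a list of mentions encoded as integer triples. A minimal sketch of resolving mentions back to token text, assuming each triple is `[sentence_index, start_token, end_token]` with inclusive, 0-based bounds (verify this against the conversion script before relying on it):

```python
# Hedged sketch: assumes each mention triple is
# [sentence_index, start_token, end_token], inclusive and 0-based.
# Check the conversion script in ianporada/coref-data to confirm.
def mention_text(doc, mention):
    sent_idx, start, end = mention
    tokens = doc["sentences"][sent_idx]["tokens"]
    return " ".join(t["text"] for t in tokens[start : end + 1])

def chains_as_text(doc):
    return [[mention_text(doc, m) for m in chain] for chain in doc["coref_chains"]]

# Tiny fabricated document with the same shape as a dataset row.
doc = {
    "sentences": [
        {"tokens": [{"text": w} for w in ["Anna", "said", "she", "left", "."]]}
    ],
    "coref_chains": [[[0, 0, 0], [0, 2, 2]]],
}
print(chains_as_text(doc))  # [['Anna', 'she']]
```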
|
Marchanjo/spider-FIT-en-pt-es-fr | ---
license: cc-by-sa-4.0
---
Distributed under Creative Commons BY-SA 4.0, respecting the ShareAlike condition of the [Spider Dataset](https://yale-lily.github.io/spider).
Code, explanations, and links for the models' checkpoints and datasets are on GitHub: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
Here is the [Hugging Face collection](https://huggingface.co/collections/Marchanjo/mrat-sql-65a671743bb0e70b416561f6), where you can download the models' checkpoints and datasets; for explanations, it is better to go to the GitHub repo [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers, due to the quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 on a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
Paper published in [Springer Nature's International Journal of Information Technology](https://doi.org/10.1007/s41870-023-01342-3) ([SharedIt link](https://rdcu.be/dff19)); [pre-print on arXiv](https://arxiv.org/abs/2306.14256).
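The database schema pruning described above can be sketched with a simple keyword-overlap heuristic. This is an illustration of the idea, not the paper's exact procedure, and the function and schema names are invented for the example:

```python
import re

# Illustrative sketch of database schema pruning: drop table/column names
# that share no word with the question, so the concatenated question+schema
# input is more likely to fit the transformer's 512-token budget.
def prune_schema(question, schema):
    q_words = set(re.findall(r"\w+", question.lower()))
    pruned = {}
    for table, columns in schema.items():
        # Keep a column if any part of its snake_case name occurs in the question.
        kept = [c for c in columns if set(c.lower().split("_")) & q_words]
        # Keep the table if it has surviving columns or its own name matches.
        if kept or set(table.lower().split("_")) & q_words:
            pruned[table] = kept
    return pruned

schema = {
    "singer": ["singer_id", "name", "country"],
    "stadium": ["stadium_id", "capacity"],
}
print(prune_schema("What is the name of each singer?", schema))
# {'singer': ['singer_id', 'name']}
```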
# mRAT-SQL+GAP
## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
BRACIS 2021: paper published in [Springer Lecture Notes in Computer Science](https://link.springer.com/chapter/10.1007%2F978-3-030-91699-2_35); [pre-print on arXiv](https://arxiv.org/abs/2110.03546).
Based on: RAT-SQL+GAP: [Github](https://github.com/awslabs/gap-text2sql). Paper: [AAAI 2021 paper](https://arxiv.org/abs/2012.10309) |
irds/mr-tydi_ko_test | ---
pretty_name: '`mr-tydi/ko/test`'
viewer: false
source_datasets: ['irds/mr-tydi_ko']
task_categories:
- text-retrieval
---
# Dataset Card for `mr-tydi/ko/test`
The `mr-tydi/ko/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/mr-tydi#mr-tydi/ko/test).
# Data
This dataset provides:
- `queries` (i.e., topics); count=421
- `qrels`: (relevance assessments); count=492
- For `docs`, use [`irds/mr-tydi_ko`](https://huggingface.co/datasets/irds/mr-tydi_ko)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/mr-tydi_ko_test', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/mr-tydi_ko_test', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@article{Zhang2021MrTyDi,
title={{Mr. TyDi}: A Multi-lingual Benchmark for Dense Retrieval},
author={Xinyu Zhang and Xueguang Ma and Peng Shi and Jimmy Lin},
year={2021},
journal={arXiv:2108.08787},
}
@article{Clark2020TyDiQa,
title={{TyDi QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
author={Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
year={2020},
journal={Transactions of the Association for Computational Linguistics}
}
```
|
CyberHarem/nagisa_bluearchive | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of nagisa/桐藤ナギサ/渚 (Blue Archive)
This is the dataset of nagisa/桐藤ナギサ/渚 (Blue Archive), containing 334 images and their tags.
The core tags of this character are `long_hair, halo, hair_ornament, hair_flower, wings, white_wings, angel_wings, feathered_wings, light_brown_hair, hair_between_eyes, yellow_eyes, breasts, braid`, which are pruned in this dataset.
Images were crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 334 | 502.17 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nagisa_bluearchive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 334 | 421.80 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nagisa_bluearchive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 820 | 848.20 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nagisa_bluearchive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/nagisa_bluearchive',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, black_neckerchief, holding_cup, long_sleeves, looking_at_viewer, sailor_collar, smile, solo, teacup, white_dress, white_flower, black_pantyhose, blush, closed_mouth, holding_saucer, sitting, brown_eyes, gun |
| 1 | 6 |  |  |  |  |  | 1girl, cleavage, collarbone, flower, simple_background, solo, white_background, closed_mouth, medium_breasts, smile, looking_at_viewer, blonde_hair, bra, brown_eyes, navel |
| 2 | 5 |  |  |  |  |  | 1girl, alternate_costume, blush, fake_animal_ears, flower, playboy_bunny, rabbit_ears, solo, closed_mouth, detached_collar, looking_at_viewer, simple_background, strapless_leotard, white_background, bare_shoulders, cleavage, highleg_leotard, large_breasts, medium_breasts, white_leotard, black_bowtie, groin, smile, thighhighs, wrist_cuffs |
| 3 | 9 |  |  |  |  |  | 1boy, 1girl, blush, flower, hetero, nipples, solo_focus, completely_nude, navel, medium_breasts, open_mouth, penis, sex, looking_at_viewer, pussy, vaginal, brown_eyes, censored, collarbone, dark-skinned_male, pov, sweat |
| 4 | 6 |  |  |  |  |  | 1girl, alternate_costume, blush, solo, flower, looking_at_viewer, outdoors, bare_shoulders, black_bikini, blue_sky, brown_eyes, collarbone, day, navel, stomach, cowboy_shot, frilled_bikini, medium_breasts, ocean |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_neckerchief | holding_cup | long_sleeves | looking_at_viewer | sailor_collar | smile | solo | teacup | white_dress | white_flower | black_pantyhose | blush | closed_mouth | holding_saucer | sitting | brown_eyes | gun | cleavage | collarbone | flower | simple_background | white_background | medium_breasts | blonde_hair | bra | navel | alternate_costume | fake_animal_ears | playboy_bunny | rabbit_ears | detached_collar | strapless_leotard | bare_shoulders | highleg_leotard | large_breasts | white_leotard | black_bowtie | groin | thighhighs | wrist_cuffs | 1boy | hetero | nipples | solo_focus | completely_nude | open_mouth | penis | sex | pussy | vaginal | censored | dark-skinned_male | pov | sweat | outdoors | black_bikini | blue_sky | day | stomach | cowboy_shot | frilled_bikini | ocean |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:--------------|:---------------|:--------------------|:----------------|:--------|:-------|:---------|:--------------|:---------------|:------------------|:--------|:---------------|:-----------------|:----------|:-------------|:------|:-----------|:-------------|:---------|:--------------------|:-------------------|:-----------------|:--------------|:------|:--------|:--------------------|:-------------------|:----------------|:--------------|:------------------|:--------------------|:-----------------|:------------------|:----------------|:----------------|:---------------|:--------|:-------------|:--------------|:-------|:---------|:----------|:-------------|:------------------|:-------------|:--------|:------|:--------|:----------|:-----------|:--------------------|:------|:--------|:-----------|:---------------|:-----------|:------|:----------|:--------------|:-----------------|:--------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | | | | X | | X | X | | | | | | X | | | X | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 5 |  |  |  |  |  | X | | | | X | | X | X | | | | | X | X | | | | | X | | X | X | X | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 9 |  |  |  |  |  | X | | | | X | | | | | | | | X | | | | X | | | X | X | | | X | | | X | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | |
| 4 | 6 |  |  |  |  |  | X | | | | X | | | X | | | | | X | | | | X | | | X | X | | | X | | | X | X | | | | | | X | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X |
|
semeru/Code-Code-CloneDetection-BigCloneBench | ---
license: mit
Programminglanguage: "Java"
version: "N/A"
Date: "2014 Big clone bench paper https://www.cs.usask.ca/faculty/croy/papers/2014/SvajlenkoICSME2014BigERA.pdf"
Contaminated: "Very Likely"
Size: "Standard Tokenizer"
---
### Dataset is imported from CodeXGLUE and pre-processed using their script.
# Where to find in Semeru:
The dataset can be found at /nfs/semeru/semeru_datasets/code_xglue/code-to-code/Clone-detection-BigCloneBench in Semeru
# CodeXGLUE -- Clone Detection (BCB)
## Task Definition
Given two code snippets as input, the task is binary classification (0/1), where 1 denotes semantic equivalence and 0 otherwise. Models are evaluated by F1 score.
## Updates
2021-9-13: We have updated the evaluator script. Since this is a binary classification task, we now use the binary F1 score instead of the "macro" F1 score.
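As a rough illustration of the metric (a minimal sketch, not the official evaluator script), the binary F1 over 0/1 clone predictions can be computed as:

```python
def binary_f1(y_true, y_pred):
    # Count true positives, false positives, and false negatives
    # with respect to the positive (clone) class, label 1.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

This matches `sklearn.metrics.f1_score(y_true, y_pred, average="binary")`, as opposed to `average="macro"`, which would average the F1 of both classes.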
## Dataset
The dataset we use is [BigCloneBench](https://www.cs.usask.ca/faculty/croy/papers/2014/SvajlenkoICSME2014BigERA.pdf) and filtered following the paper [Detecting Code Clones with Graph Neural Network and Flow-Augmented Abstract Syntax Tree](https://arxiv.org/pdf/2002.08653.pdf).
### Data Format
1. dataset/data.jsonl is stored in jsonlines format. Each line in the uncompressed file represents one function. One row is illustrated below.
- **func:** the function
- **idx:** index of the example
2. train.txt/valid.txt/test.txt provide example pairs, one per line, in the format: `idx1 idx2 label`
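A minimal sketch of how these two files can be joined into labeled code pairs (the in-memory strings below are hypothetical stand-ins for the real files; in practice, replace `io.StringIO(...)` with `open("dataset/data.jsonl")` and `open("train.txt")`):

```python
import io
import json

# Hypothetical stand-ins for dataset/data.jsonl and train.txt.
data_jsonl = io.StringIO(
    '{"func": "int add(int a, int b) { return a + b; }", "idx": "0"}\n'
    '{"func": "int sum(int x, int y) { return x + y; }", "idx": "1"}\n'
)
train_txt = io.StringIO("0 1 1\n")

# Index each function body by its idx.
funcs = {}
for line in data_jsonl:
    record = json.loads(line)
    funcs[record["idx"]] = record["func"]

# Resolve each "idx1 idx2 label" line into a (code1, code2, label) triple.
pairs = []
for line in train_txt:
    idx1, idx2, label = line.split()
    pairs.append((funcs[idx1], funcs[idx2], int(label)))
```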
### Data Statistics
Data statistics of the dataset are shown in the below table:
| | #Examples |
| ----- | :-------: |
| Train | 901,028 |
| Dev | 415,416 |
| Test | 415,416 |
## Reference
<pre><code>@inproceedings{svajlenko2014towards,
title={Towards a big data curated benchmark of inter-project code clones},
author={Svajlenko, Jeffrey and Islam, Judith F and Keivanloo, Iman and Roy, Chanchal K and Mia, Mohammad Mamun},
booktitle={2014 IEEE International Conference on Software Maintenance and Evolution},
pages={476--480},
year={2014},
organization={IEEE}
}
@inproceedings{wang2020detecting,
title={Detecting Code Clones with Graph Neural Network and Flow-Augmented Abstract Syntax Tree},
author={Wang, Wenhan and Li, Ge and Ma, Bo and Xia, Xin and Jin, Zhi},
booktitle={2020 IEEE 27th International Conference on Software Analysis, Evolution and Reengineering (SANER)},
pages={261--271},
year={2020},
organization={IEEE}
}</code></pre>
|
Odiseo/odiseoface | ---
license: artistic-2.0
---
|
elmambru/urv_test | ---
task_categories:
- table-question-answering
tags:
- code
license: apache-2.0
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Falah/arabic_glamour_prompts | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 1949534
num_examples: 10000
download_size: 328987
dataset_size: 1949534
---
# Dataset Card for "arabic_glamour_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ernie-ai/image-text-examples-ar-cn-latin-notext | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': AR_docs
'1': CN_docs
'2': Latin_docs
'3': non-text
splits:
- name: train
num_bytes: 27290843.67117117
num_examples: 754
- name: test
num_bytes: 4701416.328828828
num_examples: 134
download_size: 31849475
dataset_size: 31992260.0
---
# Dataset Card for "image-text-examples-ar-cn-latin-notext"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
frostymelonade/BOWS2-S-UNIWARD-Stego-Classification | ---
task_categories:
- image-classification
tags:
- steganography
pretty_name: 0.2 BPP stego classification from GBRASNET BOWS2 S-UNIWARD
--- |
lhallee/CC_fold | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: seqs
dtype: string
- name: labels
dtype: string
splits:
- name: train
num_bytes: 39394496
num_examples: 26224
- name: valid
num_bytes: 4335886
num_examples: 2904
- name: test
num_bytes: 5470162
num_examples: 3350
download_size: 18073432
dataset_size: 49200544
---
# Dataset Card for "CC_fold"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Yemmy1000/cybersec_embedding_llama_chat_another | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
splits:
- name: train
num_bytes: 5750270.64103804
num_examples: 7697
download_size: 2742402
dataset_size: 5750270.64103804
---
# Dataset Card for "cybersec_embedding_llama_chat_another"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TinyPixel/open-assistant | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 15974425
num_examples: 9823
download_size: 9020438
dataset_size: 15974425
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "open-assistant"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mHossain/final_train_v2_300000 | ---
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: input_text
dtype: string
- name: target_text
dtype: string
- name: prefix
dtype: string
splits:
- name: train
num_bytes: 9152502.3
num_examples: 27000
- name: test
num_bytes: 1016944.7
num_examples: 3000
download_size: 4455484
dataset_size: 10169447.0
---
# Dataset Card for "final_train_v2_300000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
stancampbell3/seashellanalytics_background_dataset | ---
license: lgpl-3.0
---
|
316usman/thematic2a_rr_embed | ---
dataset_info:
features:
- name: text
dtype: string
- name: document_url
dtype: string
- name: source_url
dtype: string
splits:
- name: train
num_bytes: 53291241
num_examples: 84993
download_size: 18284689
dataset_size: 53291241
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Carapeticof/Carapeticof | ---
license: openrail
---
|
mcemilg/IronyTR | ---
task_categories:
- text-classification
language:
- tr
---
Homepage: https://github.com/teghub/IronyTR
Labels:
- 0: non-ironic
- 1: ironic
|
tuanacanal/Reviews-ds-2 | ---
dataset_info:
features:
- name: review
dtype: string
- name: review_length
dtype: int64
splits:
- name: train
num_bytes: 134596742.14721414
num_examples: 362520
- name: validation
num_bytes: 14955564.852785867
num_examples: 40281
download_size: 95516967
dataset_size: 149552307.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
distilled-from-one-sec-cv12/chunk_48 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1160791684
num_examples: 226187
download_size: 1182974938
dataset_size: 1160791684
---
# Dataset Card for "chunk_48"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pancake/few_shot_datasets | ---
license: mit
---
# Five standard datasets for few-shot classification
- *miniImageNet*. It contains 100 classes with 600 images per class, drawn from the ImageNet dataset. The 100 classes are divided into 64, 16, and 20 for meta-training, meta-validation, and meta-testing, respectively.
- *tieredImageNet*. TieredImageNet is also a subset of ImageNet, comprising 608 classes from 34 super-classes. Compared with miniImageNet, the meta-training (20 super-classes), meta-validation (6), and meta-testing (8) splits are made along super-class boundaries to enlarge the domain difference between the training and testing phases. The dataset also includes more images for training and evaluation.
- *CIFAR-FS*. CIFAR-FS is derived from CIFAR-100, which consists of 60,000 images in 100 categories. CIFAR-FS is divided into 64, 16, and 20 classes for training, validation, and evaluation, respectively.
- *FC100*. FC100 is also derived from CIFAR-100 and is more difficult because it is more diverse. FC100 uses a split similar to tieredImageNet, where the train, validation, and test splits contain 60, 20, and 20 classes, respectively.
- *CUB*. CUB-200-2011 (CUB) is a fine-grained dataset of 200 bird species with 11,788 images in total. It is randomly divided into three disjoint sets: a training set (100 classes), a validation set (50 classes), and a testing set (50 classes). |
CVasNLPExperiments/Hatefulmemes_test_google_flan_t5_xxl_mode_T_C_A_rices_ns_1000 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: prompt
sequence: string
- name: true_label
dtype: string
- name: prediction
dtype: string
splits:
- name: fewshot_0
num_bytes: 620592
num_examples: 1000
download_size: 111620
dataset_size: 620592
---
# Dataset Card for "Hatefulmemes_test_google_flan_t5_xxl_mode_T_C_A_rices_ns_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
NeelNanda/c4-10k | ---
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: timestamp[us]
- name: url
dtype: string
splits:
- name: train
num_bytes: 21970889
num_examples: 10000
download_size: 13645542
dataset_size: 21970889
---
# Dataset Card for "c4-10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_h2oai__h2o-danube-1.8b-base | ---
pretty_name: Evaluation run of h2oai/h2o-danube-1.8b-base
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [h2oai/h2o-danube-1.8b-base](https://huggingface.co/h2oai/h2o-danube-1.8b-base)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_h2oai__h2o-danube-1.8b-base\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-01T22:53:48.852088](https://huggingface.co/datasets/open-llm-leaderboard/details_h2oai__h2o-danube-1.8b-base/blob/main/results_2024-02-01T22-53-48.852088.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.26739343781347724,\n\
\ \"acc_stderr\": 0.031037633875846887,\n \"acc_norm\": 0.2690397947420433,\n\
\ \"acc_norm_stderr\": 0.03180448205346714,\n \"mc1\": 0.20195838433292534,\n\
\ \"mc1_stderr\": 0.014053957441512348,\n \"mc2\": 0.3386425348954068,\n\
\ \"mc2_stderr\": 0.01334349743426728\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.35494880546075086,\n \"acc_stderr\": 0.013983036904094094,\n\
\ \"acc_norm\": 0.39419795221843,\n \"acc_norm_stderr\": 0.014280522667467325\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5134435371439953,\n\
\ \"acc_stderr\": 0.004987977492042154,\n \"acc_norm\": 0.6957777335192192,\n\
\ \"acc_norm_stderr\": 0.004591369853276529\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.26,\n \"acc_stderr\": 0.04408440022768081,\n \
\ \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.04408440022768081\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.2814814814814815,\n\
\ \"acc_stderr\": 0.03885004245800255,\n \"acc_norm\": 0.2814814814814815,\n\
\ \"acc_norm_stderr\": 0.03885004245800255\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.3092105263157895,\n \"acc_stderr\": 0.03761070869867479,\n\
\ \"acc_norm\": 0.3092105263157895,\n \"acc_norm_stderr\": 0.03761070869867479\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.2,\n\
\ \"acc_stderr\": 0.04020151261036845,\n \"acc_norm\": 0.2,\n \
\ \"acc_norm_stderr\": 0.04020151261036845\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.21509433962264152,\n \"acc_stderr\": 0.02528839450289137,\n\
\ \"acc_norm\": 0.21509433962264152,\n \"acc_norm_stderr\": 0.02528839450289137\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2569444444444444,\n\
\ \"acc_stderr\": 0.03653946969442099,\n \"acc_norm\": 0.2569444444444444,\n\
\ \"acc_norm_stderr\": 0.03653946969442099\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.18,\n \"acc_stderr\": 0.03861229196653694,\n \
\ \"acc_norm\": 0.18,\n \"acc_norm_stderr\": 0.03861229196653694\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.26,\n \"acc_stderr\": 0.0440844002276808,\n \"acc_norm\": 0.26,\n\
\ \"acc_norm_stderr\": 0.0440844002276808\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \
\ \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.27167630057803466,\n\
\ \"acc_stderr\": 0.03391750322321659,\n \"acc_norm\": 0.27167630057803466,\n\
\ \"acc_norm_stderr\": 0.03391750322321659\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.20588235294117646,\n \"acc_stderr\": 0.04023382273617747,\n\
\ \"acc_norm\": 0.20588235294117646,\n \"acc_norm_stderr\": 0.04023382273617747\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n\
\ \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.19148936170212766,\n \"acc_stderr\": 0.025722149992637795,\n\
\ \"acc_norm\": 0.19148936170212766,\n \"acc_norm_stderr\": 0.025722149992637795\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2543859649122807,\n\
\ \"acc_stderr\": 0.040969851398436695,\n \"acc_norm\": 0.2543859649122807,\n\
\ \"acc_norm_stderr\": 0.040969851398436695\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.296551724137931,\n \"acc_stderr\": 0.03806142687309994,\n\
\ \"acc_norm\": 0.296551724137931,\n \"acc_norm_stderr\": 0.03806142687309994\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.2619047619047619,\n \"acc_stderr\": 0.02264421261552521,\n \"\
acc_norm\": 0.2619047619047619,\n \"acc_norm_stderr\": 0.02264421261552521\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.19047619047619047,\n\
\ \"acc_stderr\": 0.03512207412302054,\n \"acc_norm\": 0.19047619047619047,\n\
\ \"acc_norm_stderr\": 0.03512207412302054\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542127,\n \
\ \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542127\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.267741935483871,\n\
\ \"acc_stderr\": 0.025189006660212388,\n \"acc_norm\": 0.267741935483871,\n\
\ \"acc_norm_stderr\": 0.025189006660212388\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.3103448275862069,\n \"acc_stderr\": 0.03255086769970103,\n\
\ \"acc_norm\": 0.3103448275862069,\n \"acc_norm_stderr\": 0.03255086769970103\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\"\
: 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.28484848484848485,\n \"acc_stderr\": 0.035243908445117836,\n\
\ \"acc_norm\": 0.28484848484848485,\n \"acc_norm_stderr\": 0.035243908445117836\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.23737373737373738,\n \"acc_stderr\": 0.03031371053819889,\n \"\
acc_norm\": 0.23737373737373738,\n \"acc_norm_stderr\": 0.03031371053819889\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.19689119170984457,\n \"acc_stderr\": 0.02869787397186069,\n\
\ \"acc_norm\": 0.19689119170984457,\n \"acc_norm_stderr\": 0.02869787397186069\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.23076923076923078,\n \"acc_stderr\": 0.021362027725222717,\n\
\ \"acc_norm\": 0.23076923076923078,\n \"acc_norm_stderr\": 0.021362027725222717\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.26296296296296295,\n \"acc_stderr\": 0.02684205787383371,\n \
\ \"acc_norm\": 0.26296296296296295,\n \"acc_norm_stderr\": 0.02684205787383371\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.18067226890756302,\n \"acc_stderr\": 0.024991964966600753,\n\
\ \"acc_norm\": 0.18067226890756302,\n \"acc_norm_stderr\": 0.024991964966600753\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.2781456953642384,\n \"acc_stderr\": 0.03658603262763743,\n \"\
acc_norm\": 0.2781456953642384,\n \"acc_norm_stderr\": 0.03658603262763743\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.21834862385321102,\n \"acc_stderr\": 0.017712600528722738,\n \"\
acc_norm\": 0.21834862385321102,\n \"acc_norm_stderr\": 0.017712600528722738\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.36574074074074076,\n \"acc_stderr\": 0.03284738857647205,\n \"\
acc_norm\": 0.36574074074074076,\n \"acc_norm_stderr\": 0.03284738857647205\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.25980392156862747,\n \"acc_stderr\": 0.030778554678693264,\n \"\
acc_norm\": 0.25980392156862747,\n \"acc_norm_stderr\": 0.030778554678693264\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.25316455696202533,\n \"acc_stderr\": 0.028304657943035303,\n \
\ \"acc_norm\": 0.25316455696202533,\n \"acc_norm_stderr\": 0.028304657943035303\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.2242152466367713,\n\
\ \"acc_stderr\": 0.027991534258519527,\n \"acc_norm\": 0.2242152466367713,\n\
\ \"acc_norm_stderr\": 0.027991534258519527\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.22137404580152673,\n \"acc_stderr\": 0.0364129708131373,\n\
\ \"acc_norm\": 0.22137404580152673,\n \"acc_norm_stderr\": 0.0364129708131373\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.2727272727272727,\n \"acc_stderr\": 0.04065578140908705,\n \"\
acc_norm\": 0.2727272727272727,\n \"acc_norm_stderr\": 0.04065578140908705\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.23148148148148148,\n\
\ \"acc_stderr\": 0.04077494709252626,\n \"acc_norm\": 0.23148148148148148,\n\
\ \"acc_norm_stderr\": 0.04077494709252626\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.3006134969325153,\n \"acc_stderr\": 0.03602511318806771,\n\
\ \"acc_norm\": 0.3006134969325153,\n \"acc_norm_stderr\": 0.03602511318806771\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.23214285714285715,\n\
\ \"acc_stderr\": 0.040073418097558065,\n \"acc_norm\": 0.23214285714285715,\n\
\ \"acc_norm_stderr\": 0.040073418097558065\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.1650485436893204,\n \"acc_stderr\": 0.036756688322331886,\n\
\ \"acc_norm\": 0.1650485436893204,\n \"acc_norm_stderr\": 0.036756688322331886\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.2777777777777778,\n\
\ \"acc_stderr\": 0.029343114798094448,\n \"acc_norm\": 0.2777777777777778,\n\
\ \"acc_norm_stderr\": 0.029343114798094448\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.22,\n \"acc_stderr\": 0.041633319989322695,\n \
\ \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.041633319989322695\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.2515964240102171,\n\
\ \"acc_stderr\": 0.015517322365529614,\n \"acc_norm\": 0.2515964240102171,\n\
\ \"acc_norm_stderr\": 0.015517322365529614\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.2658959537572254,\n \"acc_stderr\": 0.023786203255508283,\n\
\ \"acc_norm\": 0.2658959537572254,\n \"acc_norm_stderr\": 0.023786203255508283\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.24692737430167597,\n\
\ \"acc_stderr\": 0.014422292204808835,\n \"acc_norm\": 0.24692737430167597,\n\
\ \"acc_norm_stderr\": 0.014422292204808835\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.24836601307189543,\n \"acc_stderr\": 0.02473998135511359,\n\
\ \"acc_norm\": 0.24836601307189543,\n \"acc_norm_stderr\": 0.02473998135511359\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.3054662379421222,\n\
\ \"acc_stderr\": 0.02616058445014049,\n \"acc_norm\": 0.3054662379421222,\n\
\ \"acc_norm_stderr\": 0.02616058445014049\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.28703703703703703,\n \"acc_stderr\": 0.02517104191530968,\n\
\ \"acc_norm\": 0.28703703703703703,\n \"acc_norm_stderr\": 0.02517104191530968\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.2801418439716312,\n \"acc_stderr\": 0.026789172351140242,\n \
\ \"acc_norm\": 0.2801418439716312,\n \"acc_norm_stderr\": 0.026789172351140242\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.23663624511082137,\n\
\ \"acc_stderr\": 0.010855137351572728,\n \"acc_norm\": 0.23663624511082137,\n\
\ \"acc_norm_stderr\": 0.010855137351572728\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.39338235294117646,\n \"acc_stderr\": 0.02967428828131118,\n\
\ \"acc_norm\": 0.39338235294117646,\n \"acc_norm_stderr\": 0.02967428828131118\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.24836601307189543,\n \"acc_stderr\": 0.017479487001364764,\n \
\ \"acc_norm\": 0.24836601307189543,\n \"acc_norm_stderr\": 0.017479487001364764\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.22727272727272727,\n\
\ \"acc_stderr\": 0.04013964554072775,\n \"acc_norm\": 0.22727272727272727,\n\
\ \"acc_norm_stderr\": 0.04013964554072775\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.37551020408163266,\n \"acc_stderr\": 0.031001209039894843,\n\
\ \"acc_norm\": 0.37551020408163266,\n \"acc_norm_stderr\": 0.031001209039894843\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.24378109452736318,\n\
\ \"acc_stderr\": 0.030360490154014652,\n \"acc_norm\": 0.24378109452736318,\n\
\ \"acc_norm_stderr\": 0.030360490154014652\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.2891566265060241,\n\
\ \"acc_stderr\": 0.03529486801511115,\n \"acc_norm\": 0.2891566265060241,\n\
\ \"acc_norm_stderr\": 0.03529486801511115\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.28654970760233917,\n \"acc_stderr\": 0.03467826685703826,\n\
\ \"acc_norm\": 0.28654970760233917,\n \"acc_norm_stderr\": 0.03467826685703826\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.20195838433292534,\n\
\ \"mc1_stderr\": 0.014053957441512348,\n \"mc2\": 0.3386425348954068,\n\
\ \"mc2_stderr\": 0.01334349743426728\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6448303078137332,\n \"acc_stderr\": 0.013450047479569254\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.014404852160727824,\n \
\ \"acc_stderr\": 0.003282055917136914\n }\n}\n```"
repo_url: https://huggingface.co/h2oai/h2o-danube-1.8b-base
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|arc:challenge|25_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|gsm8k|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hellaswag|10_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-01T22-53-48.852088.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-01T22-53-48.852088.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- '**/details_harness|winogrande|5_2024-02-01T22-53-48.852088.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-01T22-53-48.852088.parquet'
- config_name: results
data_files:
- split: 2024_02_01T22_53_48.852088
path:
- results_2024-02-01T22-53-48.852088.parquet
- split: latest
path:
- results_2024-02-01T22-53-48.852088.parquet
---
# Dataset Card for Evaluation run of h2oai/h2o-danube-1.8b-base
Dataset automatically created during the evaluation run of model [h2oai/h2o-danube-1.8b-base](https://huggingface.co/h2oai/h2o-danube-1.8b-base) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_h2oai__h2o-danube-1.8b-base",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-02-01T22:53:48.852088](https://huggingface.co/datasets/open-llm-leaderboard/details_h2oai__h2o-danube-1.8b-base/blob/main/results_2024-02-01T22-53-48.852088.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task's results in the "latest" split of its configuration):
```python
{
"all": {
"acc": 0.26739343781347724,
"acc_stderr": 0.031037633875846887,
"acc_norm": 0.2690397947420433,
"acc_norm_stderr": 0.03180448205346714,
"mc1": 0.20195838433292534,
"mc1_stderr": 0.014053957441512348,
"mc2": 0.3386425348954068,
"mc2_stderr": 0.01334349743426728
},
"harness|arc:challenge|25": {
"acc": 0.35494880546075086,
"acc_stderr": 0.013983036904094094,
"acc_norm": 0.39419795221843,
"acc_norm_stderr": 0.014280522667467325
},
"harness|hellaswag|10": {
"acc": 0.5134435371439953,
"acc_stderr": 0.004987977492042154,
"acc_norm": 0.6957777335192192,
"acc_norm_stderr": 0.004591369853276529
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.26,
"acc_stderr": 0.04408440022768081,
"acc_norm": 0.26,
"acc_norm_stderr": 0.04408440022768081
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.2814814814814815,
"acc_stderr": 0.03885004245800255,
"acc_norm": 0.2814814814814815,
"acc_norm_stderr": 0.03885004245800255
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.3092105263157895,
"acc_stderr": 0.03761070869867479,
"acc_norm": 0.3092105263157895,
"acc_norm_stderr": 0.03761070869867479
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.2,
"acc_stderr": 0.04020151261036845,
"acc_norm": 0.2,
"acc_norm_stderr": 0.04020151261036845
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.21509433962264152,
"acc_stderr": 0.02528839450289137,
"acc_norm": 0.21509433962264152,
"acc_norm_stderr": 0.02528839450289137
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2569444444444444,
"acc_stderr": 0.03653946969442099,
"acc_norm": 0.2569444444444444,
"acc_norm_stderr": 0.03653946969442099
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.18,
"acc_stderr": 0.03861229196653694,
"acc_norm": 0.18,
"acc_norm_stderr": 0.03861229196653694
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.26,
"acc_stderr": 0.0440844002276808,
"acc_norm": 0.26,
"acc_norm_stderr": 0.0440844002276808
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.27167630057803466,
"acc_stderr": 0.03391750322321659,
"acc_norm": 0.27167630057803466,
"acc_norm_stderr": 0.03391750322321659
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.20588235294117646,
"acc_stderr": 0.04023382273617747,
"acc_norm": 0.20588235294117646,
"acc_norm_stderr": 0.04023382273617747
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.19148936170212766,
"acc_stderr": 0.025722149992637795,
"acc_norm": 0.19148936170212766,
"acc_norm_stderr": 0.025722149992637795
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2543859649122807,
"acc_stderr": 0.040969851398436695,
"acc_norm": 0.2543859649122807,
"acc_norm_stderr": 0.040969851398436695
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.296551724137931,
"acc_stderr": 0.03806142687309994,
"acc_norm": 0.296551724137931,
"acc_norm_stderr": 0.03806142687309994
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.2619047619047619,
"acc_stderr": 0.02264421261552521,
"acc_norm": 0.2619047619047619,
"acc_norm_stderr": 0.02264421261552521
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.19047619047619047,
"acc_stderr": 0.03512207412302054,
"acc_norm": 0.19047619047619047,
"acc_norm_stderr": 0.03512207412302054
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542127,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542127
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.267741935483871,
"acc_stderr": 0.025189006660212388,
"acc_norm": 0.267741935483871,
"acc_norm_stderr": 0.025189006660212388
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.3103448275862069,
"acc_stderr": 0.03255086769970103,
"acc_norm": 0.3103448275862069,
"acc_norm_stderr": 0.03255086769970103
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.28484848484848485,
"acc_stderr": 0.035243908445117836,
"acc_norm": 0.28484848484848485,
"acc_norm_stderr": 0.035243908445117836
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.23737373737373738,
"acc_stderr": 0.03031371053819889,
"acc_norm": 0.23737373737373738,
"acc_norm_stderr": 0.03031371053819889
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.19689119170984457,
"acc_stderr": 0.02869787397186069,
"acc_norm": 0.19689119170984457,
"acc_norm_stderr": 0.02869787397186069
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.23076923076923078,
"acc_stderr": 0.021362027725222717,
"acc_norm": 0.23076923076923078,
"acc_norm_stderr": 0.021362027725222717
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.26296296296296295,
"acc_stderr": 0.02684205787383371,
"acc_norm": 0.26296296296296295,
"acc_norm_stderr": 0.02684205787383371
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.18067226890756302,
"acc_stderr": 0.024991964966600753,
"acc_norm": 0.18067226890756302,
"acc_norm_stderr": 0.024991964966600753
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2781456953642384,
"acc_stderr": 0.03658603262763743,
"acc_norm": 0.2781456953642384,
"acc_norm_stderr": 0.03658603262763743
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.21834862385321102,
"acc_stderr": 0.017712600528722738,
"acc_norm": 0.21834862385321102,
"acc_norm_stderr": 0.017712600528722738
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.36574074074074076,
"acc_stderr": 0.03284738857647205,
"acc_norm": 0.36574074074074076,
"acc_norm_stderr": 0.03284738857647205
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.25980392156862747,
"acc_stderr": 0.030778554678693264,
"acc_norm": 0.25980392156862747,
"acc_norm_stderr": 0.030778554678693264
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.25316455696202533,
"acc_stderr": 0.028304657943035303,
"acc_norm": 0.25316455696202533,
"acc_norm_stderr": 0.028304657943035303
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.2242152466367713,
"acc_stderr": 0.027991534258519527,
"acc_norm": 0.2242152466367713,
"acc_norm_stderr": 0.027991534258519527
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.22137404580152673,
"acc_stderr": 0.0364129708131373,
"acc_norm": 0.22137404580152673,
"acc_norm_stderr": 0.0364129708131373
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.2727272727272727,
"acc_stderr": 0.04065578140908705,
"acc_norm": 0.2727272727272727,
"acc_norm_stderr": 0.04065578140908705
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.23148148148148148,
"acc_stderr": 0.04077494709252626,
"acc_norm": 0.23148148148148148,
"acc_norm_stderr": 0.04077494709252626
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.3006134969325153,
"acc_stderr": 0.03602511318806771,
"acc_norm": 0.3006134969325153,
"acc_norm_stderr": 0.03602511318806771
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.23214285714285715,
"acc_stderr": 0.040073418097558065,
"acc_norm": 0.23214285714285715,
"acc_norm_stderr": 0.040073418097558065
},
"harness|hendrycksTest-management|5": {
"acc": 0.1650485436893204,
"acc_stderr": 0.036756688322331886,
"acc_norm": 0.1650485436893204,
"acc_norm_stderr": 0.036756688322331886
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.2777777777777778,
"acc_stderr": 0.029343114798094448,
"acc_norm": 0.2777777777777778,
"acc_norm_stderr": 0.029343114798094448
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.22,
"acc_stderr": 0.041633319989322695,
"acc_norm": 0.22,
"acc_norm_stderr": 0.041633319989322695
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.2515964240102171,
"acc_stderr": 0.015517322365529614,
"acc_norm": 0.2515964240102171,
"acc_norm_stderr": 0.015517322365529614
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.2658959537572254,
"acc_stderr": 0.023786203255508283,
"acc_norm": 0.2658959537572254,
"acc_norm_stderr": 0.023786203255508283
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.24692737430167597,
"acc_stderr": 0.014422292204808835,
"acc_norm": 0.24692737430167597,
"acc_norm_stderr": 0.014422292204808835
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.24836601307189543,
"acc_stderr": 0.02473998135511359,
"acc_norm": 0.24836601307189543,
"acc_norm_stderr": 0.02473998135511359
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.3054662379421222,
"acc_stderr": 0.02616058445014049,
"acc_norm": 0.3054662379421222,
"acc_norm_stderr": 0.02616058445014049
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.28703703703703703,
"acc_stderr": 0.02517104191530968,
"acc_norm": 0.28703703703703703,
"acc_norm_stderr": 0.02517104191530968
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.2801418439716312,
"acc_stderr": 0.026789172351140242,
"acc_norm": 0.2801418439716312,
"acc_norm_stderr": 0.026789172351140242
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.23663624511082137,
"acc_stderr": 0.010855137351572728,
"acc_norm": 0.23663624511082137,
"acc_norm_stderr": 0.010855137351572728
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.39338235294117646,
"acc_stderr": 0.02967428828131118,
"acc_norm": 0.39338235294117646,
"acc_norm_stderr": 0.02967428828131118
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.24836601307189543,
"acc_stderr": 0.017479487001364764,
"acc_norm": 0.24836601307189543,
"acc_norm_stderr": 0.017479487001364764
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.22727272727272727,
"acc_stderr": 0.04013964554072775,
"acc_norm": 0.22727272727272727,
"acc_norm_stderr": 0.04013964554072775
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.37551020408163266,
"acc_stderr": 0.031001209039894843,
"acc_norm": 0.37551020408163266,
"acc_norm_stderr": 0.031001209039894843
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.24378109452736318,
"acc_stderr": 0.030360490154014652,
"acc_norm": 0.24378109452736318,
"acc_norm_stderr": 0.030360490154014652
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-virology|5": {
"acc": 0.2891566265060241,
"acc_stderr": 0.03529486801511115,
"acc_norm": 0.2891566265060241,
"acc_norm_stderr": 0.03529486801511115
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.28654970760233917,
"acc_stderr": 0.03467826685703826,
"acc_norm": 0.28654970760233917,
"acc_norm_stderr": 0.03467826685703826
},
"harness|truthfulqa:mc|0": {
"mc1": 0.20195838433292534,
"mc1_stderr": 0.014053957441512348,
"mc2": 0.3386425348954068,
"mc2_stderr": 0.01334349743426728
},
"harness|winogrande|5": {
"acc": 0.6448303078137332,
"acc_stderr": 0.013450047479569254
},
"harness|gsm8k|5": {
"acc": 0.014404852160727824,
"acc_stderr": 0.003282055917136914
}
}
```
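As a rough sketch of how the aggregate `"all"` accuracy relates to the per-task entries, the snippet below averages the `acc` values of a few MMLU-style tasks copied from the JSON above (the task subset and the unweighted mean are illustrative only; the leaderboard's own aggregation may weight tasks differently):

```python
# Illustrative subset of the per-task results shown above.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.26},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.2814814814814815},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.3092105263157895},
}

# Collect accuracies for the hendrycksTest (MMLU) tasks and average them.
mmlu_accs = [v["acc"] for k, v in results.items()
             if k.startswith("harness|hendrycksTest-")]
mean_acc = sum(mmlu_accs) / len(mmlu_accs)
print(f"mean acc over {len(mmlu_accs)} tasks: {mean_acc:.4f}")
```

The same pattern extends to all 57 MMLU tasks once the full results JSON has been loaded into a Python dict.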
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
xaviviro/oasst2_ca | ---
dataset_info:
features:
- name: message_id
dtype: string
- name: parent_id
dtype: string
- name: user_id
dtype: string
- name: created_date
dtype: string
- name: text
dtype: string
- name: role
dtype: string
- name: lang
dtype: string
- name: review_count
dtype: int64
- name: review_result
dtype: bool
- name: deleted
dtype: bool
- name: rank
dtype: float64
- name: synthetic
dtype: bool
- name: model_name
dtype: 'null'
- name: detoxify
struct:
- name: identity_attack
dtype: float64
- name: insult
dtype: float64
- name: obscene
dtype: float64
- name: severe_toxicity
dtype: float64
- name: sexual_explicit
dtype: float64
- name: threat
dtype: float64
- name: toxicity
dtype: float64
- name: message_tree_id
dtype: string
- name: tree_state
dtype: string
- name: emojis
struct:
- name: count
sequence: int64
- name: name
sequence: string
- name: labels
struct:
- name: count
sequence: int64
- name: name
sequence: string
- name: value
sequence: float64
splits:
- name: validation
num_bytes: 7012662
num_examples: 6598
- name: train
num_bytes: 137117942
num_examples: 128572
download_size: 50116080
dataset_size: 144130604
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: train
path: data/train-*
language:
- ca
license: apache-2.0
--- |
kz919/open-orca-flan-50k-synthetic-5-models | ---
language:
- en
license: apache-2.0
size_categories:
- 10K<n<100K
task_categories:
- text-generation
pretty_name: synthetic open-orca flan
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
- name: task
dtype: string
- name: ignos-Mistral-T5-7B-v1
dtype: string
- name: cognAI-lil-c3po
dtype: string
- name: viethq188-Rabbit-7B-DPO-Chat
dtype: string
- name: cookinai-DonutLM-v1
dtype: string
- name: v1olet-v1olet-merged-dpo-7B
dtype: string
splits:
- name: train
num_bytes: 103557970
num_examples: 50000
download_size: 47451297
dataset_size: 103557970
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Open-Orca-FLAN-50K-Synthetic-5-Models Dataset Card
### Dataset Summary
The Open-Orca-FLAN-50K-Synthetic-5-Models dataset is a large-scale, synthetic dataset based on 50K filtered examples from [Open-Orca/FLAN](https://huggingface.co/datasets/Open-Orca/FLAN). It contains 50,000 examples, each consisting of a prompt, a completion, and the corresponding task. Additionally, it includes model-generated responses from five different models: [ignos-Mistral-T5-7B-v1](https://huggingface.co/ignos/Mistral-T5-7B-v1), [cognAI-lil-c3po](https://huggingface.co/cognAI/lil-c3po), [viethq188-Rabbit-7B-DPO-Chat](https://huggingface.co/viethq188/Rabbit-7B-DPO-Chat), [cookinai-DonutLM-v1](https://huggingface.co/cookinai/DonutLM-v1), and [v1olet-v1olet-merged-dpo-7B](https://huggingface.co/v1olet/v1olet_merged_dpo_7B). This dataset is particularly useful for research in natural language understanding, language model comparison, and AI-generated text analysis.
### Supported Tasks
- **Natural Language Understanding:** The dataset can be used to train models to understand and generate human-like text.
- **Model Comparison:** Researchers can compare the performance of different language models using this dataset.
- **CoE Router Reward Modeling:** The responses from the five models can be used to train a routing mechanism that selects a model for a given query.
- **Text Generation:** It's suitable for training and evaluating models on text generation tasks.
### Languages
The dataset is primarily in English.
## Dataset Structure
### Data Instances
A typical data instance comprises the following fields:
- `prompt`: The input prompt (string).
- `completion`: The expected completion of the prompt (string).
- `task`: The specific task or category the example belongs to (string).
- Model-generated responses from five different models, each in a separate field.
### Data Fields
- `prompt`: A string containing the input prompt.
- `completion`: A string containing the expected response or completion to the prompt.
- `task`: A string indicating the type of task.
- `ignos-Mistral-T5-7B-v1`: Model-generated response from ignos-Mistral-T5-7B-v1.
- `cognAI-lil-c3po`: Model-generated response from cognAI-lil-c3po.
- `viethq188-Rabbit-7B-DPO-Chat`: Model-generated response from viethq188-Rabbit-7B-DPO-Chat.
- `cookinai-DonutLM-v1`: Model-generated response from cookinai-DonutLM-v1.
- `v1olet-v1olet-merged-dpo-7B`: Model-generated response from v1olet-v1olet-merged-dpo-7B.
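Since each row carries the reference `completion` alongside the five model-response columns, a simple per-row comparison can be sketched in plain Python (the row below is a made-up toy example, not taken from the dataset):

```python
# The five model-response column names, as listed in the data fields above.
MODEL_COLS = [
    "ignos-Mistral-T5-7B-v1",
    "cognAI-lil-c3po",
    "viethq188-Rabbit-7B-DPO-Chat",
    "cookinai-DonutLM-v1",
    "v1olet-v1olet-merged-dpo-7B",
]

def exact_match_agreement(row: dict) -> float:
    """Fraction of model columns whose (stripped) text equals the reference completion."""
    ref = row["completion"].strip()
    matches = sum(row[col].strip() == ref for col in MODEL_COLS)
    return matches / len(MODEL_COLS)

# Toy row for illustration only.
row = {
    "prompt": "2 + 2 = ?",
    "completion": "4",
    "task": "flan",
    "ignos-Mistral-T5-7B-v1": "4",
    "cognAI-lil-c3po": "4",
    "viethq188-Rabbit-7B-DPO-Chat": "four",
    "cookinai-DonutLM-v1": "4",
    "v1olet-v1olet-merged-dpo-7B": "4 ",
}
print(exact_match_agreement(row))  # → 0.8 (4 of 5 match)
```

Exact match is a crude signal; in practice a softer similarity metric would be more appropriate for free-form completions, but the same column-wise pattern applies.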
### Data Splits
The dataset is not split into traditional training, validation, and test sets. It contains all 50,000 examples in a single `train` split, designed for evaluation and comparison purposes.
## Dataset Creation
### Curation Rationale
This dataset was curated to provide a diverse and extensive set of prompts and completions, along with responses from various state-of-the-art language models, for comprehensive evaluation and comparison in language understanding and generation tasks.
### Source Data
#### Initial Data Collection and Normalization
Prompts and completions were taken from the 50K filtered Open-Orca/FLAN examples; the five response columns were generated synthetically by the listed models, ensuring a wide variety of prompts, tasks, and model-generated responses.
#### Who are the source language producers?
The prompts and completions come from the filtered Open-Orca/FLAN dataset, and the responses are produced by the specified language models.
### Annotations
The dataset does not include manual annotations. The responses are generated by the models listed.
### Personal and Sensitive Information
Since the dataset is synthetic, it does not contain any personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contributes to the advancement of natural language processing by providing a rich source for model comparison and analysis.
### Discussion of Biases
As the dataset is generated by AI models, it may inherit biases present in those models. Users should be aware of this when analyzing the data.
### Other Known Limitations
The effectiveness of the dataset is contingent on the quality and diversity of the synthetic data and the responses generated by the models.
### Licensing Information
Please refer to the repository for licensing information.
### Citation Information
```
@inproceedings{open-orca-flan-50k-synthetic-5-models,
title={Open-Orca-FLAN-50K-Synthetic-5-Models},
author={Kaizhao Liang}
}
``` |
GroundCtrl/colonogamer | ---
license: openrail
---
|
akkijp/test | ---
license: apache-2.0
task_categories:
- text-classification
language:
- ja
tags:
- chemistry
pretty_name: aaaaabbbbb
size_categories:
- 1K<n<10K
--- |