| datasetId | card |
|---|---|
elvishelvis6/clear | ---
license: cc-by-4.0
---
|
winogrande | ---
language:
- en
paperswithcode_id: winogrande
pretty_name: WinoGrande
dataset_info:
- config_name: winogrande_xs
features:
- name: sentence
dtype: string
- name: option1
dtype: string
- name: option2
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 20704
num_examples: 160
- name: test
num_bytes: 227649
num_examples: 1767
- name: validation
num_bytes: 164199
num_examples: 1267
download_size: 3395492
dataset_size: 412552
- config_name: winogrande_s
features:
- name: sentence
dtype: string
- name: option1
dtype: string
- name: option2
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 82308
num_examples: 640
- name: test
num_bytes: 227649
num_examples: 1767
- name: validation
num_bytes: 164199
num_examples: 1267
download_size: 3395492
dataset_size: 474156
- config_name: winogrande_m
features:
- name: sentence
dtype: string
- name: option1
dtype: string
- name: option2
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 329001
num_examples: 2558
- name: test
num_bytes: 227649
num_examples: 1767
- name: validation
num_bytes: 164199
num_examples: 1267
download_size: 3395492
dataset_size: 720849
- config_name: winogrande_l
features:
- name: sentence
dtype: string
- name: option1
dtype: string
- name: option2
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1319576
num_examples: 10234
- name: test
num_bytes: 227649
num_examples: 1767
- name: validation
num_bytes: 164199
num_examples: 1267
download_size: 3395492
dataset_size: 1711424
- config_name: winogrande_xl
features:
- name: sentence
dtype: string
- name: option1
dtype: string
- name: option2
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 5185832
num_examples: 40398
- name: test
num_bytes: 227649
num_examples: 1767
- name: validation
num_bytes: 164199
num_examples: 1267
download_size: 3395492
dataset_size: 5577680
- config_name: winogrande_debiased
features:
- name: sentence
dtype: string
- name: option1
dtype: string
- name: option2
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1203420
num_examples: 9248
- name: test
num_bytes: 227649
num_examples: 1767
- name: validation
num_bytes: 164199
num_examples: 1267
download_size: 3395492
dataset_size: 1595268
---
# Dataset Card for "winogrande"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://leaderboard.allenai.org/winogrande/submissions/get-started](https://leaderboard.allenai.org/winogrande/submissions/get-started)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 20.37 MB
- **Size of the generated dataset:** 10.50 MB
- **Total amount of disk used:** 30.87 MB
### Dataset Summary
WinoGrande is a collection of 44k problems, inspired by the Winograd Schema Challenge (Levesque, Davis, and Morgenstern
2011), but adjusted to improve both scale and robustness against dataset-specific bias. Formulated as a
fill-in-the-blank task with binary options, the goal is to choose the right option for a given sentence that requires
commonsense reasoning.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### winogrande_debiased
- **Size of downloaded dataset files:** 3.40 MB
- **Size of the generated dataset:** 1.59 MB
- **Total amount of disk used:** 4.99 MB
An example of 'train' looks as follows.
```
```
#### winogrande_l
- **Size of downloaded dataset files:** 3.40 MB
- **Size of the generated dataset:** 1.71 MB
- **Total amount of disk used:** 5.11 MB
An example of 'validation' looks as follows.
```
```
#### winogrande_m
- **Size of downloaded dataset files:** 3.40 MB
- **Size of the generated dataset:** 0.72 MB
- **Total amount of disk used:** 4.12 MB
An example of 'validation' looks as follows.
```
```
#### winogrande_s
- **Size of downloaded dataset files:** 3.40 MB
- **Size of the generated dataset:** 0.47 MB
- **Total amount of disk used:** 3.87 MB
An example of 'validation' looks as follows.
```
```
#### winogrande_xl
- **Size of downloaded dataset files:** 3.40 MB
- **Size of the generated dataset:** 5.58 MB
- **Total amount of disk used:** 8.98 MB
An example of 'train' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### winogrande_debiased
- `sentence`: a `string` feature.
- `option1`: a `string` feature.
- `option2`: a `string` feature.
- `answer`: a `string` feature.
#### winogrande_l
- `sentence`: a `string` feature.
- `option1`: a `string` feature.
- `option2`: a `string` feature.
- `answer`: a `string` feature.
#### winogrande_m
- `sentence`: a `string` feature.
- `option1`: a `string` feature.
- `option2`: a `string` feature.
- `answer`: a `string` feature.
#### winogrande_s
- `sentence`: a `string` feature.
- `option1`: a `string` feature.
- `option2`: a `string` feature.
- `answer`: a `string` feature.
#### winogrande_xl
- `sentence`: a `string` feature.
- `option1`: a `string` feature.
- `option2`: a `string` feature.
- `answer`: a `string` feature.
### Data Splits
| name |train|validation|test|
|-------------------|----:|---------:|---:|
|winogrande_debiased| 9248| 1267|1767|
|winogrande_l |10234| 1267|1767|
|winogrande_m | 2558| 1267|1767|
|winogrande_s | 640| 1267|1767|
|winogrande_xl |40398| 1267|1767|
|winogrande_xs | 160| 1267|1767|
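
The four string fields above combine into a single completed sentence. The sketch below, assuming the conventions from the WinoGrande paper (the `sentence` contains a single `_` placeholder and `answer` is the string `"1"` or `"2"` selecting `option1` or `option2`), shows how an instance can be resolved; the example instance itself is hypothetical.

```python
# Sketch: resolving a WinoGrande-style instance into its completed sentence.
# Assumes `sentence` contains one "_" blank and `answer` is "1" or "2".

def resolve(example: dict) -> str:
    """Fill the blank in `sentence` with the option selected by `answer`."""
    chosen = example["option1"] if example["answer"] == "1" else example["option2"]
    return example["sentence"].replace("_", chosen, 1)

# A hypothetical instance following the documented schema:
example = {
    "sentence": "The trophy doesn't fit into the suitcase because the _ is too large.",
    "option1": "trophy",
    "option2": "suitcase",
    "answer": "1",
}
print(resolve(example))
# -> The trophy doesn't fit into the suitcase because the trophy is too large.
```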
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{ai2:winogrande,
  title  = {WinoGrande: An Adversarial Winograd Schema Challenge at Scale},
  author = {Sakaguchi, Keisuke and Le Bras, Ronan and Bhagavatula, Chandra and Choi, Yejin},
  year   = {2019}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@TevenLeScao](https://github.com/TevenLeScao), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset. |
vector/test_demo | ---
annotations_creators:
- found
language_creators:
- found
language:
- zh
---
### Dataset Summary
Placeholder
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/wiki_lingua')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/wiki_lingua).
#### website
None (See Repository)
#### paper
https://www.aclweb.org/anthology/2020.findings-emnlp.360/
#### authors
Faisal Ladhak (Columbia University), Esin Durmus (Stanford University), Claire Cardie (Cornell University), Kathleen McKeown (Columbia University)
## Dataset Overview
### Where to find the Data and its Documentation
#### Webpage
<!-- info: What is the webpage for the dataset (if it exists)? -->
<!-- scope: telescope -->
None (See Repository)
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
https://github.com/esdurmus/Wikilingua
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
https://www.aclweb.org/anthology/2020.findings-emnlp.360/
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```
@inproceedings{ladhak-etal-2020-wikilingua,
    title = "{W}iki{L}ingua: A New Benchmark Dataset for Cross-Lingual Abstractive Summarization",
    author = "Ladhak, Faisal and
      Durmus, Esin and
      Cardie, Claire and
      McKeown, Kathleen",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.360",
    doi = "10.18653/v1/2020.findings-emnlp.360",
    pages = "4034--4048",
    abstract = "We introduce WikiLingua, a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems. We extract article and summary pairs in 18 languages from WikiHow, a high quality, collaborative resource of how-to guides on a diverse set of topics written by human authors. We create gold-standard article-summary alignments across languages by aligning the images that are used to describe each how-to step in an article. As a set of baselines for further studies, we evaluate the performance of existing cross-lingual abstractive summarization methods on our dataset. We further propose a method for direct cross-lingual summarization (i.e., without requiring translation at inference time) by leveraging synthetic data and Neural Machine Translation as a pre-training step. Our method significantly outperforms the baseline approaches, while being more cost efficient during inference.",
}
```
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Faisal Ladhak, Esin Durmus
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
faisal@cs.columbia.edu, esdurmus@stanford.edu
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
no
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
yes
#### Covered Dialects
<!-- info: What dialects are covered? Are there multiple dialects per language? -->
<!-- scope: periscope -->
Dataset does not have multiple dialects per language.
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`English`, `Spanish, Castilian`, `Portuguese`, `French`, `German`, `Russian`, `Italian`, `Indonesian`, `Dutch, Flemish`, `Arabic`, `Chinese`, `Vietnamese`, `Thai`, `Japanese`, `Korean`, `Hindi`, `Czech`, `Turkish`
#### Whose Language?
<!-- info: Whose language is in the dataset? -->
<!-- scope: periscope -->
No information about the user demographic is available.
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
cc-by-3.0: Creative Commons Attribution 3.0 Unported
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
The dataset was intended to serve as a large-scale, high-quality benchmark dataset for cross-lingual summarization.
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Summarization
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
Produce a high quality summary for the given input article.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`academic`
#### Curation Organization(s)
<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
Columbia University
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Faisal Ladhak (Columbia University), Esin Durmus (Stanford University), Claire Cardie (Cornell University), Kathleen McKeown (Columbia University)
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Jenny Chim (Queen Mary University of London), Faisal Ladhak (Columbia University)
### Dataset Structure
#### Data Fields
<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
`gem_id` -- The ID for the data instance.
`source_language` -- The language of the source article.
`target_language` -- The language of the target summary.
`source` -- The source document.
`target` -- The target summary.
#### Example Instance
<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
```
{
  "gem_id": "wikilingua_crosslingual-train-12345",
  "gem_parent_id": "wikilingua_crosslingual-train-12345",
  "source_language": "fr",
  "target_language": "de",
  "source": "Document in fr",
  "target": "Summary in de"
}
```
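
Since crosslingual instances carry explicit `source_language` and `target_language` fields, a common preprocessing step is grouping them by language pair. A minimal sketch (the instances below are hypothetical, following the schema shown above):

```python
# Sketch: grouping WikiLingua-style crosslingual instances by language pair.
from collections import defaultdict

instances = [
    {"gem_id": "wikilingua_crosslingual-train-1", "source_language": "fr",
     "target_language": "de", "source": "Document in fr", "target": "Summary in de"},
    {"gem_id": "wikilingua_crosslingual-train-2", "source_language": "en",
     "target_language": "es", "source": "Document in en", "target": "Summary in es"},
]

by_pair = defaultdict(list)
for ex in instances:
    by_pair[(ex["source_language"], ex["target_language"])].append(ex)

print(sorted(by_pair))  # -> [('en', 'es'), ('fr', 'de')]
```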
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
The data is split into train/dev/test. In addition to the full test set, there's also a sampled version of the test set.
#### Splitting Criteria
<!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
<!-- scope: microscope -->
The data was split to ensure the same document would appear in the same split across languages so as to ensure there's no leakage into the test set.
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
This dataset provides a large-scale, high-quality resource for cross-lingual summarization in 18 languages, increasing the coverage of languages for the GEM summarization task.
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
yes
#### Unique Language Coverage
<!-- info: Does this dataset cover other languages than other datasets for the same task? -->
<!-- scope: periscope -->
yes
#### Difference from other GEM datasets
<!-- info: What else sets this dataset apart from other similar datasets in GEM? -->
<!-- scope: microscope -->
XSum covers English news articles, and MLSum covers news articles in German and Spanish.
In contrast, this dataset has how-to articles in 18 languages, substantially increasing the language coverage. Moreover, it also covers a different domain than the other two datasets.
#### Ability that the Dataset measures
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
The ability to generate quality summaries across multiple languages.
### GEM-Specific Curation
#### Modified for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
yes
#### GEM Modifications
<!-- info: What changes have been made to the original dataset? -->
<!-- scope: periscope -->
`other`
#### Modification Details
<!-- info: For each of these changes, describe them in more detail and provide the intended purpose of the modification -->
<!-- scope: microscope -->
Previous version had separate data loaders for each language. In this version, we've created a single monolingual data loader, which contains monolingual data in each of the 18 languages. In addition, we've also created a single cross-lingual data loader across all the language pairs in the dataset.
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no
### Getting Started with the Task
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
Ability to summarize content across different languages.
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`ROUGE`
#### Proposed Evaluation
<!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
<!-- scope: microscope -->
ROUGE is used to measure content selection by comparing word overlap with reference summaries. In addition, the authors of the dataset also used human evaluation to evaluate content selection and fluency of the systems.
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
no
## Dataset Curation
### Original Curation
#### Original Curation Rationale
<!-- info: Original curation rationale -->
<!-- scope: telescope -->
The dataset was created to enable new approaches for cross-lingual and multilingual summarization, which are currently understudied, and to open up interesting new directions for research in summarization: e.g., exploring multi-source cross-lingual architectures (models that can summarize from multiple source languages into a target language), or building models that can summarize articles from any language to any other language within a given set of languages.
#### Communicative Goal
<!-- info: What was the communicative goal? -->
<!-- scope: periscope -->
Given an input article, produce a high quality summary of the article in the target language.
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
no
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Found`
#### Where was it found?
<!-- info: If found, where from? -->
<!-- scope: telescope -->
`Single website`
#### Language Producers
<!-- info: What further information do we have on the language producers? -->
<!-- scope: microscope -->
WikiHow, an online resource of how-to guides written and reviewed by human authors, is used as the data source.
#### Topics Covered
<!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
<!-- scope: periscope -->
The articles cover 19 broad categories including health, arts and entertainment, personal care and style, travel, education and communications, etc. The categories cover a broad set of genres and topics.
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
not validated
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
not filtered
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
none
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
yes
#### Consent Policy Details
<!-- info: What was the consent policy? -->
<!-- scope: microscope -->
(1) Text Content. All text posted by Users to the Service is sub-licensed by wikiHow to other Users under a Creative Commons license as provided herein. The Creative Commons license allows such text content be used freely for non-commercial purposes, so long as it is used and attributed to the original author as specified under the terms of the license. Allowing free republication of our articles helps wikiHow achieve its mission by providing instruction on solving the problems of everyday life to more people for free. In order to support this goal, wikiHow hereby grants each User of the Service a license to all text content that Users contribute to the Service under the terms and conditions of a Creative Commons CC BY-NC-SA 3.0 License. Please be sure to read the terms of the license carefully. You continue to own all right, title, and interest in and to your User Content, and you are free to distribute it as you wish, whether for commercial or non-commercial purposes.
#### Other Consented Downstream Use
<!-- info: What other downstream uses of the data did the original data creators and the data curators consent to? -->
<!-- scope: microscope -->
The data is made freely available under the Creative Commons license, therefore there are no restrictions on downstream uses as long as they are non-commercial.
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
no PII
#### Justification for no PII
<!-- info: Provide a justification for selecting `no PII` above. -->
<!-- scope: periscope -->
Only the article text and summaries were collected. No user information was retained in the dataset.
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
<!-- scope: telescope -->
yes - other datasets featuring the same task
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
no
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
yes
## Considerations for Using the Data
### PII Risks and Liability
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`non-commercial use only`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`non-commercial use only`
### Known Technical Limitations
|
erfanzar/ShareGPT4 | ---
dataset_info:
features:
- name: conversations
list:
- name: role
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 30322763
num_examples: 6144
download_size: 15605374
dataset_size: 30322763
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# ShareGPT4 Dataset
ShareGPT4 is a cleaned version of the OpenChat-ShareGPT4 dataset, designed for training conversational AI models. It contains a collection of conversations, where each conversation is a list of turns with two fields: role and value.
## Dataset Info
- **Features**:
- **conversations**:
- **role** (string): The role of the speaker in the conversation.
- **value** (string): The actual conversation text.
- **Splits**:
- **train**:
- Number of examples: 6144
- Size: 30,322,763 bytes
- **Download Size**: 15,605,374 bytes
- **Dataset Size**: 30,322,763 bytes
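
Given the role/value turn structure described above, one common preprocessing step is flattening a conversation into a single training string. A minimal sketch; the turn data and the `role: value` template here are hypothetical, not part of the dataset:

```python
# Sketch: flattening one ShareGPT4-style conversation (a list of
# {"role", "value"} turns) into a single prompt string.

def format_conversation(turns: list) -> str:
    """Join each turn as 'role: value', one turn per line."""
    return "\n".join(f"{t['role']}: {t['value']}" for t in turns)

conversation = [
    {"role": "user", "value": "What is the capital of France?"},
    {"role": "assistant", "value": "The capital of France is Paris."},
]
print(format_conversation(conversation))
```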
## Configs
- **Config Name**: default
- **Data Files**:
- **split**: train
- **path**: data/train-*
For more information on how to use this dataset with the Hugging Face library, please refer to their documentation. |
open-llm-leaderboard/details_Rallio67__7B-redpajama-conditional-alpha | ---
pretty_name: Evaluation run of Rallio67/7B-redpajama-conditional-alpha
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Rallio67/7B-redpajama-conditional-alpha](https://huggingface.co/Rallio67/7B-redpajama-conditional-alpha)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Rallio67__7B-redpajama-conditional-alpha\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-26T09:32:05.345460](https://huggingface.co/datasets/open-llm-leaderboard/details_Rallio67__7B-redpajama-conditional-alpha/blob/main/results_2023-10-26T09-32-05.345460.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0009437919463087249,\n\
\ \"em_stderr\": 0.00031446531194129435,\n \"f1\": 0.04856229026845656,\n\
\ \"f1_stderr\": 0.0012026937489831246,\n \"acc\": 0.33962342618029373,\n\
\ \"acc_stderr\": 0.0077937904808975484\n },\n \"harness|drop|3\":\
\ {\n \"em\": 0.0009437919463087249,\n \"em_stderr\": 0.00031446531194129435,\n\
\ \"f1\": 0.04856229026845656,\n \"f1_stderr\": 0.0012026937489831246\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0075815011372251705,\n \
\ \"acc_stderr\": 0.0023892815120772123\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6716653512233622,\n \"acc_stderr\": 0.013198299449717885\n\
\ }\n}\n```"
repo_url: https://huggingface.co/Rallio67/7B-redpajama-conditional-alpha
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|arc:challenge|25_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_26T09_32_05.345460
path:
- '**/details_harness|drop|3_2023-10-26T09-32-05.345460.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-26T09-32-05.345460.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_26T09_32_05.345460
path:
- '**/details_harness|gsm8k|5_2023-10-26T09-32-05.345460.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-26T09-32-05.345460.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hellaswag|10_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-18T14-45-25.410527.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-18T14-45-25.410527.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-18T14-45-25.410527.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_26T09_32_05.345460
path:
- '**/details_harness|winogrande|5_2023-10-26T09-32-05.345460.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-26T09-32-05.345460.parquet'
- config_name: results
data_files:
- split: 2023_09_18T14_45_25.410527
path:
- results_2023-09-18T14-45-25.410527.parquet
- split: 2023_10_26T09_32_05.345460
path:
- results_2023-10-26T09-32-05.345460.parquet
- split: latest
path:
- results_2023-10-26T09-32-05.345460.parquet
---
# Dataset Card for Evaluation run of Rallio67/7B-redpajama-conditional-alpha
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Rallio67/7B-redpajama-conditional-alpha
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [Rallio67/7B-redpajama-conditional-alpha](https://huggingface.co/Rallio67/7B-redpajama-conditional-alpha) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Rallio67__7B-redpajama-conditional-alpha",
"harness_winogrande_5",
	split="latest")
```
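The timestamped split names are derived from each run's ISO timestamp: judging from the config listing above, dashes and colons become underscores while the fractional-second dot is kept. A small sketch of that mapping (an inference from this file's split names, not an official API):

```python
# Sketch: derive the split name used in this repo from a run's ISO timestamp.
# Dashes and colons become underscores; the fractional-second dot is kept.
def split_name(iso_timestamp: str) -> str:
    return iso_timestamp.replace("-", "_").replace(":", "_")

print(split_name("2023-10-26T09:32:05.345460"))
# → 2023_10_26T09_32_05.345460
```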
## Latest results
These are the [latest results from run 2023-10-26T09:32:05.345460](https://huggingface.co/datasets/open-llm-leaderboard/details_Rallio67__7B-redpajama-conditional-alpha/blob/main/results_2023-10-26T09-32-05.345460.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"em": 0.0009437919463087249,
"em_stderr": 0.00031446531194129435,
"f1": 0.04856229026845656,
"f1_stderr": 0.0012026937489831246,
"acc": 0.33962342618029373,
"acc_stderr": 0.0077937904808975484
},
"harness|drop|3": {
"em": 0.0009437919463087249,
"em_stderr": 0.00031446531194129435,
"f1": 0.04856229026845656,
"f1_stderr": 0.0012026937489831246
},
"harness|gsm8k|5": {
"acc": 0.0075815011372251705,
"acc_stderr": 0.0023892815120772123
},
"harness|winogrande|5": {
"acc": 0.6716653512233622,
"acc_stderr": 0.013198299449717885
}
}
```
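The `"all"` block appears to be a plain average of the per-task metrics: its `acc` matches the mean of the `gsm8k` and `winogrande` accuracies above. This is an observation from the numbers in this file, not an official aggregation formula:

```python
# Observation: the aggregated "acc" in the "all" block equals the mean of the
# per-task accuracies reported in this results file.
gsm8k_acc = 0.0075815011372251705
winogrande_acc = 0.6716653512233622

all_acc = (gsm8k_acc + winogrande_acc) / 2
assert abs(all_acc - 0.33962342618029373) < 1e-12  # value reported under "all"
```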
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
vwxyzjn/hh-rlhf-trl-style | ---
dataset_info:
features:
- name: info
struct:
- name: id
dtype: string
- name: post
dtype: string
- name: title
dtype: string
- name: subreddit
dtype: string
- name: site
dtype: string
- name: article
dtype: string
- name: summaries
list:
- name: text
dtype: string
- name: policy
dtype: string
- name: note
dtype: string
- name: choice
dtype: int32
- name: worker
dtype: string
- name: batch
dtype: string
- name: split
dtype: string
- name: extra
struct:
- name: confidence
dtype: int32
- name: prompt
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 315969
num_examples: 50
- name: validation
num_bytes: 325197
num_examples: 50
download_size: 150469
dataset_size: 641166
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
# TRL's Anthropic HH Dataset
We preprocess the dataset using our standard `prompt, chosen, rejected` format.
## Reproduce this dataset
1. Download `tldr_preference.py` from https://huggingface.co/datasets/vwxyzjn/hh-rlhf-trl-style/tree/0.1.0.
2. Run `python examples/datasets/tldr_preference.py --debug --push_to_hub`
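
For reference, a record in the standard `prompt, chosen, rejected` format looks roughly like the following. This is a hand-written sketch: only the schema mirrors the feature list above; the field values are made up.

```python
# Illustrative record in TRL's preference format; values are invented,
# only the prompt/chosen/rejected schema matches the dataset.
example = {
    "prompt": "How do I bake bread?",
    "chosen": [
        {"role": "user", "content": "How do I bake bread?"},
        {"role": "assistant", "content": "Mix flour, water, salt, and yeast..."},
    ],
    "rejected": [
        {"role": "user", "content": "How do I bake bread?"},
        {"role": "assistant", "content": "I don't know."},
    ],
}

# Both completions share the same prompt turn; only the assistant turns differ.
assert example["chosen"][0] == example["rejected"][0]
```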
|
CJWeiss/multilong_id_rename | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 1376297066
num_examples: 3404
- name: test
num_bytes: 260869872
num_examples: 682
- name: valid
num_bytes: 206485006
num_examples: 453
download_size: 833197169
dataset_size: 1843651944
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
---
# Dataset Card for "multilong_id_rename"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yleo/guanaco-llama2 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 15401731
num_examples: 9846
download_size: 8983166
dataset_size: 15401731
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mxeval/multi-humaneval | ---
dataset_info:
features:
- name: task_id
dtype: string
- name: language
dtype: string
- name: prompt
dtype: string
- name: test
dtype: string
- name: entry_point
dtype: string
splits:
- name: multi-humaneval_python
num_bytes: 165716
num_examples: 164
download_size: 67983
dataset_size: 165716
license: apache-2.0
task_categories:
- text-generation
tags:
- mxeval
- code-generation
- multi-humaneval
- humaneval
pretty_name: multi-humaneval
language:
- en
---
# Multi-HumanEval
## Table of Contents
- [multi-humaneval](#multi-humaneval)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
  - [Related Tasks and Leaderboards](#related-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Executional Correctness](#execution)
- [Execution Example](#execution-example)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
# multi-humaneval
## Dataset Description
- **Repository:** [GitHub Repository](https://github.com/amazon-science/mbxp-exec-eval)
- **Paper:** [Multi-lingual Evaluation of Code Generation Models](https://openreview.net/forum?id=Bo7eeXm6An8)
### Dataset Summary
This repository contains data and code to perform execution-based multi-lingual evaluation of code generation capabilities,
namely the multi-lingual benchmark MBXP, multi-lingual MathQA, and multi-lingual HumanEval.
<br>Results and findings can be found in the paper ["Multi-lingual Evaluation of Code Generation Models"](https://arxiv.org/abs/2210.14868).
### Related Tasks and Leaderboards
* [Multi-HumanEval](https://huggingface.co/datasets/mxeval/multi-humaneval)
* [MBXP](https://huggingface.co/datasets/mxeval/mbxp)
* [MathQA-X](https://huggingface.co/datasets/mxeval/mathqa-x)
### Languages
The programming problems are written in multiple programming languages and contain English natural text in comments and docstrings.
## Dataset Structure
To look up the currently supported configurations
```python
from datasets import get_dataset_config_names

get_dataset_config_names("mxeval/multi-humaneval")
['python', 'csharp', 'go', 'java', 'javascript', 'kotlin', 'perl', 'php', 'ruby', 'scala', 'swift', 'typescript']
```
To load a specific dataset and language
```python
from datasets import load_dataset
load_dataset("mxeval/multi-humaneval", "python")
DatasetDict({
test: Dataset({
features: ['task_id', 'language', 'prompt', 'test', 'entry_point', 'canonical_solution', 'description'],
num_rows: 164
})
})
```
### Data Instances
An example of a dataset instance:
```python
{
"task_id": "HumanEval/0",
"language": "python",
"prompt": "from typing import List\n\n\ndef has_close_elements(numbers: List[float], threshold: float) -> bool:\n \"\"\" Check if in given list of numbers, are any two numbers closer to each other than\n given threshold.\n >>> has_close_elements([1.0, 2.0, 3.0], 0.5)\n False\n >>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)\n True\n \"\"\"\n",
"test": "\n\nMETADATA = {\n \"author\": \"jt\",\n \"dataset\": \"test\"\n}\n\n\ndef check(candidate):\n assert candidate([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.3) == True\n assert candidate([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.05) == False\n assert candidate([1.0, 2.0, 5.9, 4.0, 5.0], 0.95) == True\n assert candidate([1.0, 2.0, 5.9, 4.0, 5.0], 0.8) == False\n assert candidate([1.0, 2.0, 3.0, 4.0, 5.0, 2.0], 0.1) == True\n assert candidate([1.1, 2.2, 3.1, 4.1, 5.1], 1.0) == True\n assert candidate([1.1, 2.2, 3.1, 4.1, 5.1], 0.5) == False\n\n",
"entry_point": "has_close_elements",
"canonical_solution": " for idx, elem in enumerate(numbers):\n for idx2, elem2 in enumerate(numbers):\n if idx != idx2:\n distance = abs(elem - elem2)\n if distance < threshold:\n return True\n\n return False\n",
"description": "Check if in given list of numbers, are any two numbers closer to each other than\n given threshold.\n >>> has_close_elements([1.0, 2.0, 3.0], 0.5)\n False\n >>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)\n True"
}
```
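To illustrate how the fields fit together: concatenating `prompt` and `canonical_solution` yields a complete function, which the `test` field then checks via `entry_point`. A minimal sketch reconstructed from the `HumanEval/0` instance above:

```python
from typing import List

# Reconstructed from the instance above: `prompt` supplies the signature
# and docstring, `canonical_solution` supplies the body.
def has_close_elements(numbers: List[float], threshold: float) -> bool:
    """Check if in given list of numbers, are any two numbers closer to
    each other than given threshold."""
    for idx, elem in enumerate(numbers):
        for idx2, elem2 in enumerate(numbers):
            if idx != idx2:
                distance = abs(elem - elem2)
                if distance < threshold:
                    return True
    return False

# A couple of the assertions from the instance's `test` field:
assert has_close_elements([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.3) == True
assert has_close_elements([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.05) == False
```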
### Data Fields
- `task_id`: identifier for the data sample
- `prompt`: input for the model containing function header and docstrings
- `canonical_solution`: solution for the problem in the `prompt`
- `description`: task description
- `test`: contains function to test generated code for correctness
- `entry_point`: entry point for test
- `language`: programming language identifier used to select the appropriate subprocess call for program execution
### Data Splits
- HumanXEval
- Python
- Csharp
- Go
- Java
- Javascript
- Kotlin
- Perl
- Php
- Ruby
- Scala
- Swift
- Typescript
## Dataset Creation
### Curation Rationale
Since code generation models are often trained on dumps of GitHub, a dataset not included in those dumps was necessary to properly evaluate the model. However, since this dataset has now been published on GitHub, it is likely to be included in future dumps.
### Personal and Sensitive Information
None.
### Social Impact of Dataset
With this dataset, code-generating models can be evaluated more reliably, which leads to fewer issues being introduced when such models are used.
## Execution
### Execution Example
Install the repo [mbxp-exec-eval](https://github.com/amazon-science/mbxp-exec-eval) to execute generations or canonical solutions for the prompts from this dataset.
```python
>>> from datasets import load_dataset
>>> from mxeval.execution import check_correctness
>>> humaneval_python = load_dataset("mxeval/multi-humaneval", "python", split="test")
>>> example_problem = humaneval_python[0]
>>> check_correctness(example_problem, example_problem["canonical_solution"], timeout=20.0)
{'task_id': 'HumanEval/0', 'passed': True, 'result': 'passed', 'completion_id': None, 'time_elapsed': 9.636878967285156}
```
### Considerations for Using the Data
Make sure to sandbox the execution environment.
### Dataset Curators
AWS AI Labs
### Licensing Information
[LICENSE](https://huggingface.co/datasets/mxeval/multi-humaneval/blob/main/multi-humaneval-LICENSE) <br>
[THIRD PARTY LICENSES](https://huggingface.co/datasets/mxeval/multi-humaneval/blob/main/THIRD_PARTY_LICENSES)
### Citation Information
```
@article{mbxp_athiwaratkun2022,
title = {Multi-lingual Evaluation of Code Generation Models},
author = {Athiwaratkun, Ben and
Gouda, Sanjay Krishna and
Wang, Zijian and
Li, Xiaopeng and
Tian, Yuchen and
Tan, Ming
and Ahmad, Wasi Uddin and
Wang, Shiqi and
Sun, Qing and
Shang, Mingyue and
Gonugondla, Sujan Kumar and
Ding, Hantian and
Kumar, Varun and
Fulton, Nathan and
Farahani, Arash and
Jain, Siddhartha and
Giaquinto, Robert and
Qian, Haifeng and
Ramanathan, Murali Krishna and
Nallapati, Ramesh and
Ray, Baishakhi and
Bhatia, Parminder and
Sengupta, Sudipta and
Roth, Dan and
Xiang, Bing},
doi = {10.48550/ARXIV.2210.14868},
url = {https://arxiv.org/abs/2210.14868},
keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
### Contributions
[skgouda@](https://github.com/sk-g) [benathi@](https://github.com/benathi) |
khulegu/mn_wiki | ---
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 88813927
num_examples: 23385
download_size: 40026785
dataset_size: 88813927
---
# Dataset Card for "mn_wiki"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/mugino_shizuri_toarumajutsunoindex | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of mugino_shizuri (To Aru Majutsu no Index)
This is the dataset of mugino_shizuri (To Aru Majutsu no Index), containing 80 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
|
arieg/cluster15_large_150 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': 004093
'1': '007713'
'2': 008261
'3': 008372
'4': '013537'
'5': 013538
'6': 019760
'7': '022475'
'8': '023037'
'9': 028478
'10': 029739
'11': 035199
'12': '035544'
'13': 036959
'14': 038777
'15': 038824
'16': 038966
'17': 039378
'18': 039660
'19': 039663
'20': '041714'
'21': 042048
'22': '042146'
'23': '043516'
'24': 047826
'25': 047897
'26': 049846
'27': 049847
'28': 049856
'29': 049857
'30': 054568
'31': 055286
'32': '055430'
'33': 059654
'34': '060753'
'35': 064249
'36': '064366'
'37': '067334'
'38': 072787
'39': 075749
'40': 080693
'41': 084485
'42': 091187
'43': 091934
'44': 092124
'45': 092125
'46': 094635
'47': 097215
'48': 097585
'49': 099364
'50': 099391
'51': '103522'
'52': '106948'
'53': '110449'
'54': '113167'
'55': '113697'
'56': '117630'
'57': '126219'
'58': '126600'
'59': '126717'
'60': '128886'
'61': '130131'
'62': '130135'
'63': '132561'
'64': '133969'
'65': '134938'
'66': '134941'
'67': '135337'
'68': '135339'
'69': '135342'
'70': '143304'
'71': '144549'
'72': '147059'
'73': '148519'
'74': '148535'
splits:
- name: train
num_bytes: 649342654.5
num_examples: 11250
download_size: 638445636
dataset_size: 649342654.5
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
anismhaddouche/test | ---
license: mit
---
|
JoshVictor/Chat-Doc-Jo | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 75357388
num_examples: 60000
download_size: 45391497
dataset_size: 75357388
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
james-burton/kick_starter_funding_all_text | ---
dataset_info:
features:
- name: name
dtype: string
- name: desc
dtype: string
- name: goal
dtype: string
- name: keywords
dtype: string
- name: disable_communication
dtype: string
- name: country
dtype: string
- name: currency
dtype: string
- name: deadline
dtype: string
- name: created_at
dtype: string
- name: final_status
dtype: int64
splits:
- name: train
num_bytes: 21884995
num_examples: 73526
- name: validation
num_bytes: 3869495
num_examples: 12976
- name: test
num_bytes: 6434631
num_examples: 21626
download_size: 0
dataset_size: 32189121
---
# Dataset Card for "kick_starter_funding_all_text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ML4TSP/TSPUniformDataset | ---
license: apache-2.0
---
|
yzhuang/autotree_automl_eye_movements_sgosdt_l256_d3_sd0 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 308080000
num_examples: 10000
- name: validation
num_bytes: 308080000
num_examples: 10000
download_size: 210015405
dataset_size: 616160000
---
# Dataset Card for "autotree_automl_eye_movements_sgosdt_l256_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DanyCT25/rubrix | ---
dataset_info:
features:
- name: text
dtype: string
- name: inputs
struct:
- name: text
dtype: string
- name: prediction
list:
- name: label
dtype: string
- name: score
dtype: float64
- name: prediction_agent
dtype: string
- name: annotation
dtype: 'null'
- name: annotation_agent
dtype: 'null'
- name: vectors
dtype: 'null'
- name: multi_label
dtype: bool
- name: explanation
dtype: 'null'
- name: id
dtype: string
- name: metadata
struct:
- name: category
dtype: int64
- name: status
dtype: string
- name: event_timestamp
dtype: timestamp[us]
- name: metrics
dtype: 'null'
splits:
- name: train
num_bytes: 1445808
num_examples: 5001
download_size: 0
dataset_size: 1445808
---
# Dataset Card for "rubrix"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bigscience-data/roots_indic-ml_pib | ---
language: ml
license: cc-by-sa-4.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_indic-ml_pib
# pib
- Dataset uid: `pib`
### Description
Sentence-aligned parallel corpus between 11 Indian languages, crawled and extracted from the Press Information Bureau
website.
### Homepage
- https://huggingface.co/datasets/pib
- http://preon.iiit.ac.in/~jerin/bhasha/
### Licensing
Creative Commons Attribution-ShareAlike 4.0 International
### Speaker Locations
### Sizes
- 0.0609 % of total
- 0.6301 % of indic-hi
- 3.2610 % of indic-ur
- 0.6029 % of indic-ta
- 3.0834 % of indic-or
- 1.9757 % of indic-mr
- 0.2181 % of indic-bn
- 1.8901 % of indic-pa
- 1.5457 % of indic-gu
- 0.4695 % of indic-ml
- 0.5767 % of indic-te
### BigScience processing steps
#### Filters applied to: indic-hi
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ur
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ta
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-or
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
#### Filters applied to: indic-mr
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-bn
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-pa
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-gu
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ml
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-te
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
|
cifar100 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-80-Million-Tiny-Images
task_categories:
- image-classification
task_ids: []
paperswithcode_id: cifar-100
pretty_name: Cifar100
dataset_info:
config_name: cifar100
features:
- name: img
dtype: image
- name: fine_label
dtype:
class_label:
names:
'0': apple
'1': aquarium_fish
'2': baby
'3': bear
'4': beaver
'5': bed
'6': bee
'7': beetle
'8': bicycle
'9': bottle
'10': bowl
'11': boy
'12': bridge
'13': bus
'14': butterfly
'15': camel
'16': can
'17': castle
'18': caterpillar
'19': cattle
'20': chair
'21': chimpanzee
'22': clock
'23': cloud
'24': cockroach
'25': couch
'26': cra
'27': crocodile
'28': cup
'29': dinosaur
'30': dolphin
'31': elephant
'32': flatfish
'33': forest
'34': fox
'35': girl
'36': hamster
'37': house
'38': kangaroo
'39': keyboard
'40': lamp
'41': lawn_mower
'42': leopard
'43': lion
'44': lizard
'45': lobster
'46': man
'47': maple_tree
'48': motorcycle
'49': mountain
'50': mouse
'51': mushroom
'52': oak_tree
'53': orange
'54': orchid
'55': otter
'56': palm_tree
'57': pear
'58': pickup_truck
'59': pine_tree
'60': plain
'61': plate
'62': poppy
'63': porcupine
'64': possum
'65': rabbit
'66': raccoon
'67': ray
'68': road
'69': rocket
'70': rose
'71': sea
'72': seal
'73': shark
'74': shrew
'75': skunk
'76': skyscraper
'77': snail
'78': snake
'79': spider
'80': squirrel
'81': streetcar
'82': sunflower
'83': sweet_pepper
'84': table
'85': tank
'86': telephone
'87': television
'88': tiger
'89': tractor
'90': train
'91': trout
'92': tulip
'93': turtle
'94': wardrobe
'95': whale
'96': willow_tree
'97': wolf
'98': woman
'99': worm
- name: coarse_label
dtype:
class_label:
names:
'0': aquatic_mammals
'1': fish
'2': flowers
'3': food_containers
'4': fruit_and_vegetables
'5': household_electrical_devices
'6': household_furniture
'7': insects
'8': large_carnivores
'9': large_man-made_outdoor_things
'10': large_natural_outdoor_scenes
'11': large_omnivores_and_herbivores
'12': medium_mammals
'13': non-insect_invertebrates
'14': people
'15': reptiles
'16': small_mammals
'17': trees
'18': vehicles_1
'19': vehicles_2
splits:
- name: train
num_bytes: 112545106.0
num_examples: 50000
- name: test
num_bytes: 22564261.0
num_examples: 10000
download_size: 142291368
dataset_size: 135109367.0
configs:
- config_name: cifar100
data_files:
- split: train
path: cifar100/train-*
- split: test
path: cifar100/test-*
default: true
---
# Dataset Card for CIFAR-100
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [CIFAR Datasets](https://www.cs.toronto.edu/~kriz/cifar.html)
- **Repository:**
- **Paper:** [Paper](https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The CIFAR-100 dataset consists of 60000 32x32 colour images in 100 classes, with 600 images
per class. There are 500 training images and 100 testing images per class. There are 50000 training images and 10000 test images. The 100 classes are grouped into 20 superclasses.
There are two labels per image - fine label (actual class) and coarse label (superclass).
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given image into one of 100 classes. The leaderboard is available [here](https://paperswithcode.com/sota/image-classification-on-cifar-100).
### Languages
English
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{
'img': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=32x32 at 0x2767F58E080>, 'fine_label': 19,
'coarse_label': 11
}
```
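The integer labels in a sample decode back to class names via the mappings listed under Data Fields. A minimal sketch of that decoding, with the mappings abbreviated to the two entries needed for the sample above:

```python
# Abbreviated label mappings (full lists are given under Data Fields).
fine_names = {19: "cattle", 0: "apple", 99: "worm"}
coarse_names = {11: "large_omnivores_and_herbivores", 0: "aquatic_mammals"}

sample = {"fine_label": 19, "coarse_label": 11}
print(fine_names[sample["fine_label"]])      # cattle
print(coarse_names[sample["coarse_label"]])  # large_omnivores_and_herbivores
```

With the dataset loaded via the `datasets` library, the same decoding is available directly from the features, e.g. `dataset.features["fine_label"].int2str(19)`.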
### Data Fields
- `img`: A `PIL.Image.Image` object containing the 32x32 image. Note that when accessing the image column (`dataset[0]["img"]`), the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"img"` column, *i.e.* `dataset[0]["img"]` should **always** be preferred over `dataset["img"][0]`
- `fine_label`: an `int` classification label with the following mapping:
`0`: apple
`1`: aquarium_fish
`2`: baby
`3`: bear
`4`: beaver
`5`: bed
`6`: bee
`7`: beetle
`8`: bicycle
`9`: bottle
`10`: bowl
`11`: boy
`12`: bridge
`13`: bus
`14`: butterfly
`15`: camel
`16`: can
`17`: castle
`18`: caterpillar
`19`: cattle
`20`: chair
`21`: chimpanzee
`22`: clock
`23`: cloud
`24`: cockroach
`25`: couch
`26`: cra
`27`: crocodile
`28`: cup
`29`: dinosaur
`30`: dolphin
`31`: elephant
`32`: flatfish
`33`: forest
`34`: fox
`35`: girl
`36`: hamster
`37`: house
`38`: kangaroo
`39`: keyboard
`40`: lamp
`41`: lawn_mower
`42`: leopard
`43`: lion
`44`: lizard
`45`: lobster
`46`: man
`47`: maple_tree
`48`: motorcycle
`49`: mountain
`50`: mouse
`51`: mushroom
`52`: oak_tree
`53`: orange
`54`: orchid
`55`: otter
`56`: palm_tree
`57`: pear
`58`: pickup_truck
`59`: pine_tree
`60`: plain
`61`: plate
`62`: poppy
`63`: porcupine
`64`: possum
`65`: rabbit
`66`: raccoon
`67`: ray
`68`: road
`69`: rocket
`70`: rose
`71`: sea
`72`: seal
`73`: shark
`74`: shrew
`75`: skunk
`76`: skyscraper
`77`: snail
`78`: snake
`79`: spider
`80`: squirrel
`81`: streetcar
`82`: sunflower
`83`: sweet_pepper
`84`: table
`85`: tank
`86`: telephone
`87`: television
`88`: tiger
`89`: tractor
`90`: train
`91`: trout
`92`: tulip
`93`: turtle
`94`: wardrobe
`95`: whale
`96`: willow_tree
`97`: wolf
`98`: woman
`99`: worm
- `coarse_label`: an `int` coarse classification label with the following mapping:
`0`: aquatic_mammals
`1`: fish
`2`: flowers
`3`: food_containers
`4`: fruit_and_vegetables
`5`: household_electrical_devices
`6`: household_furniture
`7`: insects
`8`: large_carnivores
`9`: large_man-made_outdoor_things
`10`: large_natural_outdoor_scenes
`11`: large_omnivores_and_herbivores
`12`: medium_mammals
`13`: non-insect_invertebrates
`14`: people
`15`: reptiles
`16`: small_mammals
`17`: trees
`18`: vehicles_1
`19`: vehicles_2
### Data Splits
| name |train|test|
|----------|----:|---------:|
|cifar100|50000| 10000|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@TECHREPORT{Krizhevsky09learningmultiple,
author = {Alex Krizhevsky},
title = {Learning multiple layers of features from tiny images},
institution = {},
year = {2009}
}
```
### Contributions
Thanks to [@gchhablani](https://github.com/gchhablani) for adding this dataset. |
viditsorg/autotrain-data-mbart_english | ---
dataset_info:
features:
- name: autotrain_text
dtype: string
- name: autotrain_label
dtype: string
splits:
- name: train
num_bytes: 68838512
num_examples: 1600
- name: validation
num_bytes: 8686179
num_examples: 200
download_size: 43165966
dataset_size: 77524691
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
# Dataset Card for "autotrain-data-mbart_english"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
fbellame/confoo | ---
license: apache-2.0
---
|
jha2ee/Sound_Spectrogram | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Sound_Drum
'1': Sound_Piano
'2': Sound_Violin
'3': airplane
'4': breathing
'5': brushing_teeth
'6': can_opening
'7': car_horn
'8': cat
'9': chainsaw
'10': chirping_birds
'11': church_bells
'12': clapping
'13': clock_alarm
'14': clock_tick
'15': coughing
'16': cow
'17': crackling_fire
'18': crickets
'19': crow
'20': crying_baby
'21': dog
'22': door_wood_creaks
'23': door_wood_knock
'24': drinking_sipping
'25': engine
'26': fireworks
'27': footsteps
'28': frog
'29': glass_breaking
'30': helicopter
'31': hen
'32': insects
'33': keyboard_typing
'34': laughing
'35': mouse_click
'36': pig
'37': pouring_water
'38': rain
'39': rooster
'40': sea_waves
'41': sheep
'42': siren
'43': sneezing
'44': snoring
'45': toilet_flush
'46': train
'47': vacuum_cleaner
'48': washing_machine
'49': water_drops
'50': wind
splits:
- name: train
num_bytes: 141766644.635
num_examples: 1981
download_size: 141547931
dataset_size: 141766644.635
task_categories:
- feature-extraction
tags:
- sound
- environment
- instrument
- effect
---
# Dataset Card for "Sound_Spectrogram"
# Questions about dataset
1. What is Spectrogram?
2. How were these converted to image?
3. Where is dataset from?
4. How can I use this?
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_aloobun__Cypher-7B | ---
pretty_name: Evaluation run of aloobun/Cypher-7B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [aloobun/Cypher-7B](https://huggingface.co/aloobun/Cypher-7B) on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_aloobun__Cypher-7B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-04-04T22:24:34.715547](https://huggingface.co/datasets/open-llm-leaderboard/details_aloobun__Cypher-7B/blob/main/results_2024-04-04T22-24-34.715547.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6486449613762667,\n\
\ \"acc_stderr\": 0.03205716424397747,\n \"acc_norm\": 0.6492392744490298,\n\
\ \"acc_norm_stderr\": 0.032712997418463424,\n \"mc1\": 0.4430844553243574,\n\
\ \"mc1_stderr\": 0.017389730346877106,\n \"mc2\": 0.6154915386378632,\n\
\ \"mc2_stderr\": 0.01530044267019002\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6518771331058021,\n \"acc_stderr\": 0.013921008595179344,\n\
\ \"acc_norm\": 0.6945392491467577,\n \"acc_norm_stderr\": 0.013460080478002508\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.681736705835491,\n\
\ \"acc_stderr\": 0.004648503177353959,\n \"acc_norm\": 0.8626767576180043,\n\
\ \"acc_norm_stderr\": 0.003434848525388187\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n\
\ \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6,\n \"acc_stderr\"\
: 0.04232073695151589,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\"\
: 0.04232073695151589\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \
\ \"acc\": 0.7105263157894737,\n \"acc_stderr\": 0.03690677986137283,\n\
\ \"acc_norm\": 0.7105263157894737,\n \"acc_norm_stderr\": 0.03690677986137283\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.61,\n\
\ \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.61,\n \
\ \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6867924528301886,\n \"acc_stderr\": 0.028544793319055326,\n\
\ \"acc_norm\": 0.6867924528301886,\n \"acc_norm_stderr\": 0.028544793319055326\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.75,\n\
\ \"acc_stderr\": 0.03621034121889507,\n \"acc_norm\": 0.75,\n \
\ \"acc_norm_stderr\": 0.03621034121889507\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620332,\n \
\ \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620332\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.51,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.51,\n\
\ \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542127,\n \
\ \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542127\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6416184971098265,\n\
\ \"acc_stderr\": 0.036563436533531585,\n \"acc_norm\": 0.6416184971098265,\n\
\ \"acc_norm_stderr\": 0.036563436533531585\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.37254901960784315,\n \"acc_stderr\": 0.04810840148082635,\n\
\ \"acc_norm\": 0.37254901960784315,\n \"acc_norm_stderr\": 0.04810840148082635\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.78,\n \"acc_stderr\": 0.04163331998932263,\n \"acc_norm\": 0.78,\n\
\ \"acc_norm_stderr\": 0.04163331998932263\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5659574468085107,\n \"acc_stderr\": 0.03240038086792747,\n\
\ \"acc_norm\": 0.5659574468085107,\n \"acc_norm_stderr\": 0.03240038086792747\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5175438596491229,\n\
\ \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.5175438596491229,\n\
\ \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5586206896551724,\n \"acc_stderr\": 0.04137931034482757,\n\
\ \"acc_norm\": 0.5586206896551724,\n \"acc_norm_stderr\": 0.04137931034482757\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.42328042328042326,\n \"acc_stderr\": 0.025446365634406783,\n \"\
acc_norm\": 0.42328042328042326,\n \"acc_norm_stderr\": 0.025446365634406783\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4603174603174603,\n\
\ \"acc_stderr\": 0.04458029125470973,\n \"acc_norm\": 0.4603174603174603,\n\
\ \"acc_norm_stderr\": 0.04458029125470973\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.7870967741935484,\n \"acc_stderr\": 0.023287665127268542,\n \"\
acc_norm\": 0.7870967741935484,\n \"acc_norm_stderr\": 0.023287665127268542\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.5073891625615764,\n \"acc_stderr\": 0.035176035403610105,\n \"\
acc_norm\": 0.5073891625615764,\n \"acc_norm_stderr\": 0.035176035403610105\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\"\
: 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7878787878787878,\n \"acc_stderr\": 0.03192271569548301,\n\
\ \"acc_norm\": 0.7878787878787878,\n \"acc_norm_stderr\": 0.03192271569548301\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7929292929292929,\n \"acc_stderr\": 0.028869778460267045,\n \"\
acc_norm\": 0.7929292929292929,\n \"acc_norm_stderr\": 0.028869778460267045\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8911917098445595,\n \"acc_stderr\": 0.02247325333276877,\n\
\ \"acc_norm\": 0.8911917098445595,\n \"acc_norm_stderr\": 0.02247325333276877\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6410256410256411,\n \"acc_stderr\": 0.024321738484602354,\n\
\ \"acc_norm\": 0.6410256410256411,\n \"acc_norm_stderr\": 0.024321738484602354\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.337037037037037,\n \"acc_stderr\": 0.02882088466625326,\n \
\ \"acc_norm\": 0.337037037037037,\n \"acc_norm_stderr\": 0.02882088466625326\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6932773109243697,\n \"acc_stderr\": 0.029953823891887037,\n\
\ \"acc_norm\": 0.6932773109243697,\n \"acc_norm_stderr\": 0.029953823891887037\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.36423841059602646,\n \"acc_stderr\": 0.03929111781242741,\n \"\
acc_norm\": 0.36423841059602646,\n \"acc_norm_stderr\": 0.03929111781242741\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8440366972477065,\n \"acc_stderr\": 0.01555580271359017,\n \"\
acc_norm\": 0.8440366972477065,\n \"acc_norm_stderr\": 0.01555580271359017\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5092592592592593,\n \"acc_stderr\": 0.034093869469927006,\n \"\
acc_norm\": 0.5092592592592593,\n \"acc_norm_stderr\": 0.034093869469927006\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8431372549019608,\n \"acc_stderr\": 0.025524722324553353,\n \"\
acc_norm\": 0.8431372549019608,\n \"acc_norm_stderr\": 0.025524722324553353\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.810126582278481,\n \"acc_stderr\": 0.025530100460233504,\n \
\ \"acc_norm\": 0.810126582278481,\n \"acc_norm_stderr\": 0.025530100460233504\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6905829596412556,\n\
\ \"acc_stderr\": 0.031024411740572213,\n \"acc_norm\": 0.6905829596412556,\n\
\ \"acc_norm_stderr\": 0.031024411740572213\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7862595419847328,\n \"acc_stderr\": 0.0359546161177469,\n\
\ \"acc_norm\": 0.7862595419847328,\n \"acc_norm_stderr\": 0.0359546161177469\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7851239669421488,\n \"acc_stderr\": 0.037494924487096966,\n \"\
acc_norm\": 0.7851239669421488,\n \"acc_norm_stderr\": 0.037494924487096966\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7962962962962963,\n\
\ \"acc_stderr\": 0.03893542518824847,\n \"acc_norm\": 0.7962962962962963,\n\
\ \"acc_norm_stderr\": 0.03893542518824847\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7791411042944786,\n \"acc_stderr\": 0.03259177392742178,\n\
\ \"acc_norm\": 0.7791411042944786,\n \"acc_norm_stderr\": 0.03259177392742178\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4732142857142857,\n\
\ \"acc_stderr\": 0.047389751192741546,\n \"acc_norm\": 0.4732142857142857,\n\
\ \"acc_norm_stderr\": 0.047389751192741546\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n\
\ \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8632478632478633,\n\
\ \"acc_stderr\": 0.022509033937077805,\n \"acc_norm\": 0.8632478632478633,\n\
\ \"acc_norm_stderr\": 0.022509033937077805\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8314176245210728,\n\
\ \"acc_stderr\": 0.0133878957315436,\n \"acc_norm\": 0.8314176245210728,\n\
\ \"acc_norm_stderr\": 0.0133878957315436\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7456647398843931,\n \"acc_stderr\": 0.02344582627654554,\n\
\ \"acc_norm\": 0.7456647398843931,\n \"acc_norm_stderr\": 0.02344582627654554\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4,\n\
\ \"acc_stderr\": 0.016384638410380823,\n \"acc_norm\": 0.4,\n \
\ \"acc_norm_stderr\": 0.016384638410380823\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.738562091503268,\n \"acc_stderr\": 0.025160998214292452,\n\
\ \"acc_norm\": 0.738562091503268,\n \"acc_norm_stderr\": 0.025160998214292452\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7138263665594855,\n\
\ \"acc_stderr\": 0.02567025924218893,\n \"acc_norm\": 0.7138263665594855,\n\
\ \"acc_norm_stderr\": 0.02567025924218893\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7407407407407407,\n \"acc_stderr\": 0.024383665531035457,\n\
\ \"acc_norm\": 0.7407407407407407,\n \"acc_norm_stderr\": 0.024383665531035457\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4858156028368794,\n \"acc_stderr\": 0.02981549448368206,\n \
\ \"acc_norm\": 0.4858156028368794,\n \"acc_norm_stderr\": 0.02981549448368206\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.46479791395045633,\n\
\ \"acc_stderr\": 0.012738547371303956,\n \"acc_norm\": 0.46479791395045633,\n\
\ \"acc_norm_stderr\": 0.012738547371303956\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6985294117647058,\n \"acc_stderr\": 0.027875982114273168,\n\
\ \"acc_norm\": 0.6985294117647058,\n \"acc_norm_stderr\": 0.027875982114273168\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.684640522875817,\n \"acc_stderr\": 0.018798086284886887,\n \
\ \"acc_norm\": 0.684640522875817,\n \"acc_norm_stderr\": 0.018798086284886887\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6727272727272727,\n\
\ \"acc_stderr\": 0.0449429086625209,\n \"acc_norm\": 0.6727272727272727,\n\
\ \"acc_norm_stderr\": 0.0449429086625209\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7224489795918367,\n \"acc_stderr\": 0.028666857790274648,\n\
\ \"acc_norm\": 0.7224489795918367,\n \"acc_norm_stderr\": 0.028666857790274648\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8557213930348259,\n\
\ \"acc_stderr\": 0.024845753212306053,\n \"acc_norm\": 0.8557213930348259,\n\
\ \"acc_norm_stderr\": 0.024845753212306053\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.86,\n \"acc_stderr\": 0.0348735088019777,\n \
\ \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.0348735088019777\n },\n\
\ \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5542168674698795,\n\
\ \"acc_stderr\": 0.03869543323472101,\n \"acc_norm\": 0.5542168674698795,\n\
\ \"acc_norm_stderr\": 0.03869543323472101\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8421052631578947,\n \"acc_stderr\": 0.027966785859160896,\n\
\ \"acc_norm\": 0.8421052631578947,\n \"acc_norm_stderr\": 0.027966785859160896\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4430844553243574,\n\
\ \"mc1_stderr\": 0.017389730346877106,\n \"mc2\": 0.6154915386378632,\n\
\ \"mc2_stderr\": 0.01530044267019002\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8232044198895028,\n \"acc_stderr\": 0.010721923287918753\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6626231993934799,\n \
\ \"acc_stderr\": 0.013023665136222096\n }\n}\n```"
repo_url: https://huggingface.co/aloobun/Cypher-7B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|arc:challenge|25_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|arc:challenge|25_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|gsm8k|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|gsm8k|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hellaswag|10_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hellaswag|10_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-04T21-17-05.234082.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-04T22-24-34.715547.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-04T22-24-34.715547.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- '**/details_harness|winogrande|5_2024-04-04T21-17-05.234082.parquet'
- split: 2024_04_04T22_24_34.715547
path:
- '**/details_harness|winogrande|5_2024-04-04T22-24-34.715547.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-04-04T22-24-34.715547.parquet'
- config_name: results
data_files:
- split: 2024_04_04T21_17_05.234082
path:
- results_2024-04-04T21-17-05.234082.parquet
- split: 2024_04_04T22_24_34.715547
path:
- results_2024-04-04T22-24-34.715547.parquet
- split: latest
path:
- results_2024-04-04T22-24-34.715547.parquet
---
# Dataset Card for Evaluation run of aloobun/Cypher-7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [aloobun/Cypher-7B](https://huggingface.co/aloobun/Cypher-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
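The timestamped split names above are derived from the run timestamps by replacing the hyphens and colons with underscores (compare split `2024_04_04T22_24_34.715547` with run `2024-04-04T22:24:34.715547`). A minimal helper sketching that mapping, assuming all splits follow this pattern:

```python
def run_timestamp_to_split(ts: str) -> str:
    """Convert a run timestamp (e.g. "2024-04-04T22:24:34.715547")
    into the corresponding split name used in the configs above."""
    date, time = ts.split("T")
    # Hyphens in the date and colons in the time become underscores;
    # the fractional-seconds dot is kept as-is.
    return date.replace("-", "_") + "T" + time.replace(":", "_")

print(run_timestamp_to_split("2024-04-04T22:24:34.715547"))
# -> 2024_04_04T22_24_34.715547
```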
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_aloobun__Cypher-7B",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-04-04T22:24:34.715547](https://huggingface.co/datasets/open-llm-leaderboard/details_aloobun__Cypher-7B/blob/main/results_2024-04-04T22-24-34.715547.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the "results" config and the "latest" split of each eval):
```python
{
"all": {
"acc": 0.6486449613762667,
"acc_stderr": 0.03205716424397747,
"acc_norm": 0.6492392744490298,
"acc_norm_stderr": 0.032712997418463424,
"mc1": 0.4430844553243574,
"mc1_stderr": 0.017389730346877106,
"mc2": 0.6154915386378632,
"mc2_stderr": 0.01530044267019002
},
"harness|arc:challenge|25": {
"acc": 0.6518771331058021,
"acc_stderr": 0.013921008595179344,
"acc_norm": 0.6945392491467577,
"acc_norm_stderr": 0.013460080478002508
},
"harness|hellaswag|10": {
"acc": 0.681736705835491,
"acc_stderr": 0.004648503177353959,
"acc_norm": 0.8626767576180043,
"acc_norm_stderr": 0.003434848525388187
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6,
"acc_stderr": 0.04232073695151589,
"acc_norm": 0.6,
"acc_norm_stderr": 0.04232073695151589
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7105263157894737,
"acc_stderr": 0.03690677986137283,
"acc_norm": 0.7105263157894737,
"acc_norm_stderr": 0.03690677986137283
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6867924528301886,
"acc_stderr": 0.028544793319055326,
"acc_norm": 0.6867924528301886,
"acc_norm_stderr": 0.028544793319055326
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.75,
"acc_stderr": 0.03621034121889507,
"acc_norm": 0.75,
"acc_norm_stderr": 0.03621034121889507
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542127,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542127
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6416184971098265,
"acc_stderr": 0.036563436533531585,
"acc_norm": 0.6416184971098265,
"acc_norm_stderr": 0.036563436533531585
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.37254901960784315,
"acc_stderr": 0.04810840148082635,
"acc_norm": 0.37254901960784315,
"acc_norm_stderr": 0.04810840148082635
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932263,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932263
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5659574468085107,
"acc_stderr": 0.03240038086792747,
"acc_norm": 0.5659574468085107,
"acc_norm_stderr": 0.03240038086792747
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5175438596491229,
"acc_stderr": 0.04700708033551038,
"acc_norm": 0.5175438596491229,
"acc_norm_stderr": 0.04700708033551038
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5586206896551724,
"acc_stderr": 0.04137931034482757,
"acc_norm": 0.5586206896551724,
"acc_norm_stderr": 0.04137931034482757
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.42328042328042326,
"acc_stderr": 0.025446365634406783,
"acc_norm": 0.42328042328042326,
"acc_norm_stderr": 0.025446365634406783
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4603174603174603,
"acc_stderr": 0.04458029125470973,
"acc_norm": 0.4603174603174603,
"acc_norm_stderr": 0.04458029125470973
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7870967741935484,
"acc_stderr": 0.023287665127268542,
"acc_norm": 0.7870967741935484,
"acc_norm_stderr": 0.023287665127268542
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5073891625615764,
"acc_stderr": 0.035176035403610105,
"acc_norm": 0.5073891625615764,
"acc_norm_stderr": 0.035176035403610105
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7878787878787878,
"acc_stderr": 0.03192271569548301,
"acc_norm": 0.7878787878787878,
"acc_norm_stderr": 0.03192271569548301
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7929292929292929,
"acc_stderr": 0.028869778460267045,
"acc_norm": 0.7929292929292929,
"acc_norm_stderr": 0.028869778460267045
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8911917098445595,
"acc_stderr": 0.02247325333276877,
"acc_norm": 0.8911917098445595,
"acc_norm_stderr": 0.02247325333276877
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6410256410256411,
"acc_stderr": 0.024321738484602354,
"acc_norm": 0.6410256410256411,
"acc_norm_stderr": 0.024321738484602354
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.337037037037037,
"acc_stderr": 0.02882088466625326,
"acc_norm": 0.337037037037037,
"acc_norm_stderr": 0.02882088466625326
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6932773109243697,
"acc_stderr": 0.029953823891887037,
"acc_norm": 0.6932773109243697,
"acc_norm_stderr": 0.029953823891887037
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.36423841059602646,
"acc_stderr": 0.03929111781242741,
"acc_norm": 0.36423841059602646,
"acc_norm_stderr": 0.03929111781242741
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8440366972477065,
"acc_stderr": 0.01555580271359017,
"acc_norm": 0.8440366972477065,
"acc_norm_stderr": 0.01555580271359017
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5092592592592593,
"acc_stderr": 0.034093869469927006,
"acc_norm": 0.5092592592592593,
"acc_norm_stderr": 0.034093869469927006
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8431372549019608,
"acc_stderr": 0.025524722324553353,
"acc_norm": 0.8431372549019608,
"acc_norm_stderr": 0.025524722324553353
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.810126582278481,
"acc_stderr": 0.025530100460233504,
"acc_norm": 0.810126582278481,
"acc_norm_stderr": 0.025530100460233504
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6905829596412556,
"acc_stderr": 0.031024411740572213,
"acc_norm": 0.6905829596412556,
"acc_norm_stderr": 0.031024411740572213
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7862595419847328,
"acc_stderr": 0.0359546161177469,
"acc_norm": 0.7862595419847328,
"acc_norm_stderr": 0.0359546161177469
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7851239669421488,
"acc_stderr": 0.037494924487096966,
"acc_norm": 0.7851239669421488,
"acc_norm_stderr": 0.037494924487096966
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7962962962962963,
"acc_stderr": 0.03893542518824847,
"acc_norm": 0.7962962962962963,
"acc_norm_stderr": 0.03893542518824847
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7791411042944786,
"acc_stderr": 0.03259177392742178,
"acc_norm": 0.7791411042944786,
"acc_norm_stderr": 0.03259177392742178
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4732142857142857,
"acc_stderr": 0.047389751192741546,
"acc_norm": 0.4732142857142857,
"acc_norm_stderr": 0.047389751192741546
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8632478632478633,
"acc_stderr": 0.022509033937077805,
"acc_norm": 0.8632478632478633,
"acc_norm_stderr": 0.022509033937077805
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8314176245210728,
"acc_stderr": 0.0133878957315436,
"acc_norm": 0.8314176245210728,
"acc_norm_stderr": 0.0133878957315436
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7456647398843931,
"acc_stderr": 0.02344582627654554,
"acc_norm": 0.7456647398843931,
"acc_norm_stderr": 0.02344582627654554
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4,
"acc_stderr": 0.016384638410380823,
"acc_norm": 0.4,
"acc_norm_stderr": 0.016384638410380823
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.738562091503268,
"acc_stderr": 0.025160998214292452,
"acc_norm": 0.738562091503268,
"acc_norm_stderr": 0.025160998214292452
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7138263665594855,
"acc_stderr": 0.02567025924218893,
"acc_norm": 0.7138263665594855,
"acc_norm_stderr": 0.02567025924218893
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7407407407407407,
"acc_stderr": 0.024383665531035457,
"acc_norm": 0.7407407407407407,
"acc_norm_stderr": 0.024383665531035457
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4858156028368794,
"acc_stderr": 0.02981549448368206,
"acc_norm": 0.4858156028368794,
"acc_norm_stderr": 0.02981549448368206
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.46479791395045633,
"acc_stderr": 0.012738547371303956,
"acc_norm": 0.46479791395045633,
"acc_norm_stderr": 0.012738547371303956
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6985294117647058,
"acc_stderr": 0.027875982114273168,
"acc_norm": 0.6985294117647058,
"acc_norm_stderr": 0.027875982114273168
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.684640522875817,
"acc_stderr": 0.018798086284886887,
"acc_norm": 0.684640522875817,
"acc_norm_stderr": 0.018798086284886887
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6727272727272727,
"acc_stderr": 0.0449429086625209,
"acc_norm": 0.6727272727272727,
"acc_norm_stderr": 0.0449429086625209
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7224489795918367,
"acc_stderr": 0.028666857790274648,
"acc_norm": 0.7224489795918367,
"acc_norm_stderr": 0.028666857790274648
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8557213930348259,
"acc_stderr": 0.024845753212306053,
"acc_norm": 0.8557213930348259,
"acc_norm_stderr": 0.024845753212306053
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.86,
"acc_stderr": 0.0348735088019777,
"acc_norm": 0.86,
"acc_norm_stderr": 0.0348735088019777
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5542168674698795,
"acc_stderr": 0.03869543323472101,
"acc_norm": 0.5542168674698795,
"acc_norm_stderr": 0.03869543323472101
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8421052631578947,
"acc_stderr": 0.027966785859160896,
"acc_norm": 0.8421052631578947,
"acc_norm_stderr": 0.027966785859160896
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4430844553243574,
"mc1_stderr": 0.017389730346877106,
"mc2": 0.6154915386378632,
"mc2_stderr": 0.01530044267019002
},
"harness|winogrande|5": {
"acc": 0.8232044198895028,
"acc_stderr": 0.010721923287918753
},
"harness|gsm8k|5": {
"acc": 0.6626231993934799,
"acc_stderr": 0.013023665136222096
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
MartinKu/test_temp | ---
dataset_info:
features:
- name: text
dtype: string
- name: S_V
sequence: string
- name: S_V_position
sequence: int64
- name: O_C
sequence: string
- name: O_C_position
sequence: int64
splits:
- name: train
num_bytes: 9100
num_examples: 40
download_size: 8683
dataset_size: 9100
---
# Dataset Card for "test_temp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
LIAGM/DAEFR_test_datasets | ---
license: apache-2.0
---
We evaluate DAEFR on one synthetic dataset, **CelebA-Test**, and two real-world datasets, **LFW-Test** and **WIDER-Test**.
<table>
<tr>
<th>Datasets</th>
<th>Filename</th>
<th>Short Description</th>
<th>Source</th>
</tr>
<tr>
<td>CelebA-Test (HQ)</td>
<td>celeba_512_validation.zip</td>
<td>3000 (HQ) ground truth images for evaluation</td>
<td><a href="https://github.com/wzhouxiff/RestoreFormer">RestoreFormer</a></td>
</tr>
<tr>
<td>CelebA-Test (LQ)</td>
<td>self_celeba_512_v2.zip</td>
<td>3000 (LQ) synthetic images for testing</td>
<td>Ourselves</td>
</tr>
<tr>
<td>LFW-Test (LQ)</td>
<td>lfw_cropped_faces.zip</td>
<td>1711 real-world images for testing</td>
<td><a href="https://github.com/TencentARC/VQFR">VQFR</a></td>
</tr>
<tr>
<td>WIDER-Test (LQ)</td>
<td>Wider-Test.zip</td>
<td>970 real-world images for testing</td>
<td><a href="https://shangchenzhou.com/projects/CodeFormer/">CodeFormer</a></td>
</tr>
</table> |
argilla/ultrafeedback-multi-binarized-preferences-cleaned | ---
language:
- en
license: mit
size_categories:
- 100K<n<1M
task_categories:
- text-generation
pretty_name: UltraFeedback Multi-Binarized Preferences Cleaned
dataset_info:
features:
- name: source
dtype: string
- name: prompt
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: chosen-rating
dtype: float64
- name: chosen-model
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected-rating
dtype: float64
- name: rejected-model
dtype: string
splits:
- name: train
num_bytes: 738122612
num_examples: 157675
download_size: 196872615
dataset_size: 738122612
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- dpo
- preference
- ultrafeedback
---
# UltraFeedback - Multi-Binarized using the Average of Preference Ratings (Cleaned)
This dataset represents a new iteration on top of [`argilla/ultrafeedback-binarized-preferences-cleaned`](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences-cleaned),
and has been created to explore whether DPO fine-tuning with more than one rejection per chosen response helps the model perform better in the
AlpacaEval, MT-Bench, and LM Eval Harness benchmarks.
Read more about Argilla's approach towards UltraFeedback binarization at [`argilla/ultrafeedback-binarized-preferences/README.md`](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences/blob/main/README.md),
and about the parent approach of this one at [`argilla/ultrafeedback-binarized-preferences-cleaned/README.md`](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences-cleaned/blob/main/README.md).
## Differences with `argilla/ultrafeedback-binarized-preferences`
The [AllenAI](https://huggingface.co/allenai) team recently identified a TruthfulQA contamination issue within the original UltraFeedback dataset: some prompts were reused from the TruthfulQA dataset, which is used for benchmarking in the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) from HuggingFace H4. Following AllenAI's advice, we removed those prompts from the UltraFeedback dataset, which we binarized using a completely different approach: the average of the preference ratings rather than the critique overall score used by [`HuggingFaceH4/ultrafeedback_binarized`](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
Besides that, we found that not only the rows with `source=truthful_qa` were contaminated (for obvious reasons), but also some rows coming from ShareGPT, so we removed those as well by performing a left join with both subsets of the [`truthful_qa`](https://huggingface.co/datasets/truthful_qa) dataset.
Finally, we modified the formatting to align with both [`HuggingFaceH4/ultrafeedback_binarized`](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) and [`allenai/ultrafeedback_binarized_cleaned`](https://huggingface.co/datasets/allenai/ultrafeedback_binarized_cleaned) in order to ease integration with the [`huggingface/alignment-handbook`](https://github.com/huggingface/alignment-handbook), so that the formatting is standardized.
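The "average of the preference ratings" rule mentioned above can be sketched as follows. Note that the completion structure and ratings below are invented for illustration and simplify the real `openbmb/UltraFeedback` records:

```python
# Sketch: binarize by average preference rating (not the critique overall
# score). The record layout and rating values are illustrative only.
completions = [
    {"model": "gpt-4", "ratings": [5, 5, 4, 5]},
    {"model": "alpaca-7b", "ratings": [3, 2, 4, 3]},
    {"model": "llama-2-7b-chat", "ratings": [4, 4, 4, 4]},
]

def mean_rating(completion):
    ratings = completion["ratings"]
    return sum(ratings) / len(ratings)

# Highest average becomes the chosen response, lowest the rejected one.
ranked = sorted(completions, key=mean_rating, reverse=True)
chosen, rejected = ranked[0], ranked[-1]
print(chosen["model"], rejected["model"])  # gpt-4 alpaca-7b
```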
## Differences with `argilla/ultrafeedback-binarized-preferences-cleaned`
We kept the same pre-processing steps for cleaning [`openbmb/UltraFeedback`](https://huggingface.co/datasets/openbmb/UltraFeedback), as well as the same preference-rating calculation for deciding whether a response is chosen or rejected. The difference is that this dataset uses a multi-binarization approach: each chosen response gets one row per rejected response, so the same prompt appears multiple times with the same chosen response but a different rejected response in each row.
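A toy sketch of that row layout follows; the rows mimic the card's schema but are invented for illustration:

```python
from collections import defaultdict

# Toy rows mimicking this dataset's layout: the same prompt and chosen
# response repeated, with a different rejected response in each row.
rows = [
    {"prompt": "What is 2+2?",
     "chosen": [{"role": "assistant", "content": "4"}],
     "rejected": [{"role": "assistant", "content": "5"}]},
    {"prompt": "What is 2+2?",
     "chosen": [{"role": "assistant", "content": "4"}],
     "rejected": [{"role": "assistant", "content": "22"}]},
]

rejected_by_prompt = defaultdict(list)
for row in rows:
    rejected_by_prompt[row["prompt"]].append(row["rejected"][0]["content"])

# One chosen response, several rejected ones for the same prompt:
print(rejected_by_prompt["What is 2+2?"])  # ['5', '22']
```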
## Reproduce
<a target="_blank" href="https://colab.research.google.com/drive/1CTvQq_HmwuUPTuAboGLFDtqcel4xef-g?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
To reproduce the data processing, which combines our approach with the formatting suggestions from HuggingFace H4 and AllenAI's advice on removing the TruthfulQA contamination, feel free to run the attached Colab Notebook, or just view it at [`notebook.ipynb`](./notebook.ipynb) within this repository.
From Argilla we encourage anyone out there to play around, investigate, and experiment with the data. We firmly believe in open sourcing what we do: we ourselves, as well as the whole community, benefit greatly from open source, and we also want to give back.
## Citation
If you find this dataset useful in your work, please cite the original UltraFeedback dataset: https://huggingface.co/datasets/openbmb/UltraFeedback
Additionally, you may also want to cite our work on Notus 7B, which led the curation of the UltraFeedback dataset:
```bibtex
@misc{notus2023,
author = {Alvaro Bartolome and Gabriel Martin and Daniel Vila},
title = {Notus},
year = {2023},
publisher = {GitHub},
journal = {GitHub Repository},
howpublished = {\url{https://github.com/argilla-io/notus}}
}
```
> Alphabetically ordered by last name due to equal contribution. |
Locutusque/hercules-v2.0 | ---
language:
- en
license: apache-2.0
size_categories:
- 1M<n<10M
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: source
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 2323224670
num_examples: 1307174
download_size: 1177302986
dataset_size: 2323224670
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- code
- function calling
- chemistry
- biology
- physics
- math
- medical
- not-for-all-audiences
- synthetic
---
### Dataset Card for Hercules-v2.0

#### Overview
**Dataset Name:** Hercules-v2.0
**Version:** 2.0
**Date of Release:** February 2, 2024
**Size:** 1,307,174
**Data Sources:**
Hercules-v2.0 is an enriched instruction dataset derived from OpenHermes-2.5, aimed at enhancing its diversity and scope. The dataset amalgamates contributions from various data sources, with a strong emphasis on Biology, Physics, Medicine, Math, Computer Science, Instruction Following, Function Calling, and Roleplay. The data sources used to construct Hercules-v2.0 include:
- cognitivecomputations/dolphin (first 200k examples)
- Evol Instruct 70K && 140K
- teknium/GPT4-LLM-Cleaned
- jondurbin/airoboros-3.2
- AlekseyKorshuk/camel-chatml
- CollectiveCognition/chats-data-2023-09-22
- Nebulous/lmsys-chat-1m-smortmodelsonly
- glaiveai/glaive-code-assistant-v2
- glaiveai/glaive-code-assistant
- glaiveai/glaive-function-calling-v2
- garage-bAInd/Open-Platypus
- meta-math/MetaMathQA (first 40k examples)
- teknium/GPTeacher-General-Instruct
- GPTeacher roleplay datasets
- BI55/MedText
- pubmed_qa labeled subset
- Unnatural Instructions
- CollectiveCognition/chats-data-2023-09-27
- CollectiveCognition/chats-data-2023-10-16
This dataset was generated mostly with GPT-4, but responses from other models, such as Claude-1, Claude-1-instant, Claude-2, Claude-2.1, and GPT-3.5-Turbo, can also be found in the data.
Curation of this dataset was based on findings from hercules-v1.0.
Warning: This dataset contains toxic examples. Use at your own risk.
#### Description
Hercules-v2.0 is designed to serve as a comprehensive and multifaceted dataset tailored for the development and evaluation of advanced machine learning models, particularly those focused on natural language understanding and processing in specialized domains. It includes a variety of formats, such as question-answering pairs, dialogues, function calls, and roleplay scenarios, providing robust training material for models to handle complex instructions and execute function calls.
#### Data Format
The dataset includes JSON-formatted entries, with a unique structure to incorporate function calling examples. Each entry is composed of a sequence of interactions, each tagged with "from" to indicate the speaker (human, function-call, function-response, or gpt) and "value" to present the content or payload of the interaction. For example:
```json
[
{ "from": "human", "value": "Hi, I need to convert a temperature from Celsius to Fahrenheit. The temperature is 30 degrees Celsius." },
{ "from": "function-call", "value": "{\"name\": \"convert_temperature\", \"arguments\": '{\"temperature\": 30, \"from_unit\": \"Celsius\", \"to_unit\": \"Fahrenheit\"}'}" },
{ "from": "function-response", "value": "{\"converted_temperature\": 86}" },
{ "from": "gpt", "value": "The converted temperature from 30 degrees Celsius to Fahrenheit is 86 degrees Fahrenheit." }
]
```
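A consumer of these rows typically needs to decode the `value` payload of a `function-call` turn. A minimal sketch follows; it assumes the payload parses as JSON, and, as the example above suggests, some rows wrap the `arguments` field in an extra string layer, so the parser handles both cases:

```python
import json

# One function-call turn in the card's format; the payload below is a
# cleaned-up illustration of the example above, not a verbatim row.
turn = {
    "from": "function-call",
    "value": '{"name": "convert_temperature", "arguments": '
             '"{\\"temperature\\": 30, \\"from_unit\\": \\"Celsius\\", \\"to_unit\\": \\"Fahrenheit\\"}"}',
}

call = json.loads(turn["value"])
arguments = call["arguments"]
if isinstance(arguments, str):  # handle the stringified-arguments variant
    arguments = json.loads(arguments)

print(call["name"])              # convert_temperature
print(arguments["temperature"])  # 30
```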
#### Usage
The Hercules-v2.0 dataset is designed for training and evaluating AI systems in their ability to follow instructions, execute function calls, and interact in roleplay scenarios across various scientific and technical disciplines. Researchers and developers can leverage this dataset for:
- Enhancing language models' understanding of complex topics.
- Improving the accuracy of function-call executions within conversational agents.
- Developing models capable of engaging in educational and informative dialogue.
- Benchmarking systems on their ability to follow intricate instructions and provide accurate responses.
#### Licensing
This dataset is released under the apache-2.0 license.
#### Citation
Researchers using Hercules-v2.0 in their work should cite the dataset as follows:
```
@misc{sebastian_gabarain_2024,
title = {Hercules-v2.0: An Instruction Dataset for Specialized Domains},
author = {Sebastian Gabarain},
publisher = {HuggingFace},
year = {2024},
  doi = {10.57967/hf/1744},
url = {https://huggingface.co/datasets/Locutusque/hercules-v2.0}
}
```
#### Acknowledgements
Hercules-v2.0 was made possible thanks to the contributions from various datasets and the community's efforts in compiling and refining data to create a rich and diverse instruction set. Special thanks go to the creator of OpenHermes-2.5 and all the data sources listed above.
#### Version History
v2.0: Current version with enhanced diversity and scope.
v1.0: Initial release. |
ovior/twitter_dataset_1713069886 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 2267429
num_examples: 7052
download_size: 1269961
dataset_size: 2267429
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
senhorsapo/anya | ---
license: openrail
---
|
ardauzunoglu/tr-wikihow-summ | ---
dataset_info:
features:
- name: text
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 279070558
num_examples: 113356
- name: validation
num_bytes: 15174147
num_examples: 6082
- name: test
num_bytes: 14888006
num_examples: 5984
download_size: 166588788
dataset_size: 309132711
---
# Dataset Card for "tr-wikihow-summ"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/1d21656f | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 186
num_examples: 10
download_size: 1329
dataset_size: 186
---
# Dataset Card for "1d21656f"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Falah/manga_art_style_prompts | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 119589
num_examples: 1000
download_size: 15846
dataset_size: 119589
---
# Dataset Card for "manga_art_style_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Locutusque/Hercules-v3.0 | ---
license: other
task_categories:
- text-generation
- question-answering
- conversational
language:
- en
tags:
- not-for-all-audiences
- chemistry
- biology
- code
- medical
- synthetic
---
# Hercules-v3.0

- **Dataset Name:** Hercules-v3.0
- **Version:** 3.0
- **Release Date:** 2024-2-14
- **Number of Examples:** 1,637,895
- **Domains:** Math, Science, Biology, Physics, Instruction Following, Conversation, Computer Science, Roleplay, and more
- **Languages:** Mostly English, though other languages also appear.
- **Task Types:** Question Answering, Conversational Modeling, Instruction Following, Code Generation, Roleplay
## Data Source Description
Hercules-v3.0 is an extensive and diverse dataset that combines various domains to create a powerful tool for training artificial intelligence models. The data sources include conversations, coding examples, scientific explanations, and more. The dataset is sourced from multiple high-quality repositories, each contributing to the robustness of Hercules-v3.0 in different knowledge domains.
## Included Data Sources
- `cognitivecomputations/dolphin`
- `Evol Instruct 70K & 140K`
- `teknium/GPT4-LLM-Cleaned`
- `jondurbin/airoboros-3.2`
- `AlekseyKorshuk/camel-chatml`
- `CollectiveCognition/chats-data-2023-09-22`
- `Nebulous/lmsys-chat-1m-smortmodelsonly`
- `glaiveai/glaive-code-assistant-v2`
- `glaiveai/glaive-code-assistant`
- `glaiveai/glaive-function-calling-v2`
- `garage-bAInd/Open-Platypus`
- `meta-math/MetaMathQA`
- `teknium/GPTeacher-General-Instruct`
- `GPTeacher roleplay datasets`
- `BI55/MedText`
- `pubmed_qa labeled subset`
- `Unnatural Instructions`
- `M4-ai/LDJnr_combined_inout_format`
- `CollectiveCognition/chats-data-2023-09-27`
- `CollectiveCognition/chats-data-2023-10-16`
- `NobodyExistsOnTheInternet/sharegptPIPPA`
- `yuekai/openchat_sharegpt_v3_vicuna_format`
- `ise-uiuc/Magicoder-Evol-Instruct-110K`
- `Squish42/bluemoon-fandom-1-1-rp-cleaned`
- `sablo/oasst2_curated`
Note: I would recommend filtering out any bluemoon examples, as they seem to cause performance degradation.
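A sketch of that filter, assuming each row carries a `source` field (as in Hercules-v2.0) and that the bluemoon subset's source label contains the substring "bluemoon"; both labels below are assumptions for illustration:

```python
# Hypothetical rows; the `source` labels are assumptions for illustration.
rows = [
    {"source": "Squish42/bluemoon-fandom-1-1-rp-cleaned", "conversations": []},
    {"source": "meta-math/MetaMathQA", "conversations": []},
]

kept = [row for row in rows if "bluemoon" not in row["source"].lower()]
print([row["source"] for row in kept])  # ['meta-math/MetaMathQA']
```

With the `datasets` library, the same predicate can be applied to the full dataset via `dataset.filter(lambda row: "bluemoon" not in row["source"].lower())`.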
## Data Characteristics
The dataset amalgamates text from various domains, including structured and unstructured data. It contains dialogues, instructional texts, scientific explanations, coding tasks, and more.
## Intended Use
Hercules-v3.0 is designed for training and evaluating AI models capable of handling complex tasks across multiple domains. It is suitable for researchers and developers in academia and industry working on advanced conversational agents, instruction-following models, and knowledge-intensive applications.
## Data Quality
The data was collected from reputable sources with an emphasis on diversity and quality. It is expected to be relatively clean but may require additional preprocessing for specific tasks.
## Limitations and Bias
- The dataset may have inherent biases from the original data sources.
- Some domains may be overrepresented due to the nature of the source datasets.
## X-rated Content Disclaimer
Hercules-v3.0 contains X-rated content. Users are solely responsible for the use of the dataset and must ensure that their use complies with all applicable laws and regulations. The dataset maintainers are not responsible for the misuse of the dataset.
## Usage Agreement
By using the Hercules-v3.0 dataset, users agree to the following:
- The dataset is used at the user's own risk.
- The dataset maintainers are not liable for any damages arising from the use of the dataset.
- Users will not hold the dataset maintainers responsible for any claims, liabilities, losses, or expenses.
Please make sure to read the license for more information.
## Citation
```
@misc{sebastian_gabarain_2024,
title = {Hercules-v3.0: The "Golden Ratio" for High Quality Instruction Datasets},
author = {Sebastian Gabarain},
publisher = {HuggingFace},
year = {2024},
url = {https://huggingface.co/datasets/Locutusque/Hercules-v3.0}
}
``` |
open-llm-leaderboard/details_aboros98__merlin1.4 | ---
pretty_name: Evaluation run of aboros98/merlin1.4
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [aboros98/merlin1.4](https://huggingface.co/aboros98/merlin1.4) on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_aboros98__merlin1.4\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-15T12:59:35.847418](https://huggingface.co/datasets/open-llm-leaderboard/details_aboros98__merlin1.4/blob/main/results_2024-03-15T12-59-35.847418.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5651825045623053,\n\
\ \"acc_stderr\": 0.03399328820379374,\n \"acc_norm\": 0.5670113817407626,\n\
\ \"acc_norm_stderr\": 0.03469345112171627,\n \"mc1\": 0.32558139534883723,\n\
\ \"mc1_stderr\": 0.016403989469907825,\n \"mc2\": 0.4735785703076855,\n\
\ \"mc2_stderr\": 0.015147123054621936\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5631399317406144,\n \"acc_stderr\": 0.014494421584256519,\n\
\ \"acc_norm\": 0.5930034129692833,\n \"acc_norm_stderr\": 0.014356399418009123\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5630352519418442,\n\
\ \"acc_stderr\": 0.004949969363017659,\n \"acc_norm\": 0.7449711212905795,\n\
\ \"acc_norm_stderr\": 0.004349866376068982\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.27,\n \"acc_stderr\": 0.04461960433384741,\n \
\ \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.04461960433384741\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4444444444444444,\n\
\ \"acc_stderr\": 0.04292596718256981,\n \"acc_norm\": 0.4444444444444444,\n\
\ \"acc_norm_stderr\": 0.04292596718256981\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.5723684210526315,\n \"acc_stderr\": 0.04026097083296563,\n\
\ \"acc_norm\": 0.5723684210526315,\n \"acc_norm_stderr\": 0.04026097083296563\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.55,\n\
\ \"acc_stderr\": 0.049999999999999996,\n \"acc_norm\": 0.55,\n \
\ \"acc_norm_stderr\": 0.049999999999999996\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.5849056603773585,\n \"acc_stderr\": 0.03032594578928611,\n\
\ \"acc_norm\": 0.5849056603773585,\n \"acc_norm_stderr\": 0.03032594578928611\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6041666666666666,\n\
\ \"acc_stderr\": 0.04089465449325582,\n \"acc_norm\": 0.6041666666666666,\n\
\ \"acc_norm_stderr\": 0.04089465449325582\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.36,\n \"acc_stderr\": 0.048241815132442176,\n \
\ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.048241815132442176\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.43,\n \"acc_stderr\": 0.049756985195624284,\n \"acc_norm\"\
: 0.43,\n \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.42,\n \"acc_stderr\": 0.049604496374885836,\n \
\ \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.049604496374885836\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5491329479768786,\n\
\ \"acc_stderr\": 0.03794012674697031,\n \"acc_norm\": 0.5491329479768786,\n\
\ \"acc_norm_stderr\": 0.03794012674697031\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.29411764705882354,\n \"acc_stderr\": 0.045338381959297736,\n\
\ \"acc_norm\": 0.29411764705882354,\n \"acc_norm_stderr\": 0.045338381959297736\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n\
\ \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5276595744680851,\n \"acc_stderr\": 0.03263597118409769,\n\
\ \"acc_norm\": 0.5276595744680851,\n \"acc_norm_stderr\": 0.03263597118409769\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.35964912280701755,\n\
\ \"acc_stderr\": 0.045144961328736334,\n \"acc_norm\": 0.35964912280701755,\n\
\ \"acc_norm_stderr\": 0.045144961328736334\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5793103448275863,\n \"acc_stderr\": 0.0411391498118926,\n\
\ \"acc_norm\": 0.5793103448275863,\n \"acc_norm_stderr\": 0.0411391498118926\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.41798941798941797,\n \"acc_stderr\": 0.025402555503260912,\n \"\
acc_norm\": 0.41798941798941797,\n \"acc_norm_stderr\": 0.025402555503260912\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.36507936507936506,\n\
\ \"acc_stderr\": 0.04306241259127153,\n \"acc_norm\": 0.36507936507936506,\n\
\ \"acc_norm_stderr\": 0.04306241259127153\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.4,\n \"acc_stderr\": 0.049236596391733084,\n \
\ \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.049236596391733084\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6709677419354839,\n\
\ \"acc_stderr\": 0.026729499068349958,\n \"acc_norm\": 0.6709677419354839,\n\
\ \"acc_norm_stderr\": 0.026729499068349958\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.49261083743842365,\n \"acc_stderr\": 0.035176035403610084,\n\
\ \"acc_norm\": 0.49261083743842365,\n \"acc_norm_stderr\": 0.035176035403610084\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.65,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\"\
: 0.65,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.6363636363636364,\n \"acc_stderr\": 0.03756335775187899,\n\
\ \"acc_norm\": 0.6363636363636364,\n \"acc_norm_stderr\": 0.03756335775187899\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.702020202020202,\n \"acc_stderr\": 0.03258630383836556,\n \"acc_norm\"\
: 0.702020202020202,\n \"acc_norm_stderr\": 0.03258630383836556\n },\n\
\ \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \
\ \"acc\": 0.7616580310880829,\n \"acc_stderr\": 0.030748905363909895,\n\
\ \"acc_norm\": 0.7616580310880829,\n \"acc_norm_stderr\": 0.030748905363909895\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.5461538461538461,\n \"acc_stderr\": 0.025242770987126177,\n\
\ \"acc_norm\": 0.5461538461538461,\n \"acc_norm_stderr\": 0.025242770987126177\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.31851851851851853,\n \"acc_stderr\": 0.028406533090608463,\n \
\ \"acc_norm\": 0.31851851851851853,\n \"acc_norm_stderr\": 0.028406533090608463\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.5798319327731093,\n \"acc_stderr\": 0.03206183783236152,\n \
\ \"acc_norm\": 0.5798319327731093,\n \"acc_norm_stderr\": 0.03206183783236152\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3443708609271523,\n \"acc_stderr\": 0.038796870240733264,\n \"\
acc_norm\": 0.3443708609271523,\n \"acc_norm_stderr\": 0.038796870240733264\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.7743119266055046,\n \"acc_stderr\": 0.017923087667803067,\n \"\
acc_norm\": 0.7743119266055046,\n \"acc_norm_stderr\": 0.017923087667803067\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.4212962962962963,\n \"acc_stderr\": 0.03367462138896078,\n \"\
acc_norm\": 0.4212962962962963,\n \"acc_norm_stderr\": 0.03367462138896078\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.6421568627450981,\n \"acc_stderr\": 0.03364487286088299,\n \"\
acc_norm\": 0.6421568627450981,\n \"acc_norm_stderr\": 0.03364487286088299\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7088607594936709,\n \"acc_stderr\": 0.029571601065753374,\n \
\ \"acc_norm\": 0.7088607594936709,\n \"acc_norm_stderr\": 0.029571601065753374\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6322869955156951,\n\
\ \"acc_stderr\": 0.03236198350928275,\n \"acc_norm\": 0.6322869955156951,\n\
\ \"acc_norm_stderr\": 0.03236198350928275\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.6717557251908397,\n \"acc_stderr\": 0.04118438565806298,\n\
\ \"acc_norm\": 0.6717557251908397,\n \"acc_norm_stderr\": 0.04118438565806298\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.71900826446281,\n \"acc_stderr\": 0.04103203830514511,\n \"acc_norm\"\
: 0.71900826446281,\n \"acc_norm_stderr\": 0.04103203830514511\n },\n\
\ \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7222222222222222,\n\
\ \"acc_stderr\": 0.043300437496507437,\n \"acc_norm\": 0.7222222222222222,\n\
\ \"acc_norm_stderr\": 0.043300437496507437\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.6993865030674846,\n \"acc_stderr\": 0.03602511318806771,\n\
\ \"acc_norm\": 0.6993865030674846,\n \"acc_norm_stderr\": 0.03602511318806771\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5446428571428571,\n\
\ \"acc_stderr\": 0.04726835553719097,\n \"acc_norm\": 0.5446428571428571,\n\
\ \"acc_norm_stderr\": 0.04726835553719097\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7184466019417476,\n \"acc_stderr\": 0.04453254836326468,\n\
\ \"acc_norm\": 0.7184466019417476,\n \"acc_norm_stderr\": 0.04453254836326468\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8162393162393162,\n\
\ \"acc_stderr\": 0.025372139671722933,\n \"acc_norm\": 0.8162393162393162,\n\
\ \"acc_norm_stderr\": 0.025372139671722933\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.63,\n \"acc_stderr\": 0.04852365870939098,\n \
\ \"acc_norm\": 0.63,\n \"acc_norm_stderr\": 0.04852365870939098\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.6896551724137931,\n\
\ \"acc_stderr\": 0.01654378502604831,\n \"acc_norm\": 0.6896551724137931,\n\
\ \"acc_norm_stderr\": 0.01654378502604831\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6705202312138728,\n \"acc_stderr\": 0.025305258131879716,\n\
\ \"acc_norm\": 0.6705202312138728,\n \"acc_norm_stderr\": 0.025305258131879716\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.264804469273743,\n\
\ \"acc_stderr\": 0.01475690648326066,\n \"acc_norm\": 0.264804469273743,\n\
\ \"acc_norm_stderr\": 0.01475690648326066\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6143790849673203,\n \"acc_stderr\": 0.02787074527829027,\n\
\ \"acc_norm\": 0.6143790849673203,\n \"acc_norm_stderr\": 0.02787074527829027\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6334405144694534,\n\
\ \"acc_stderr\": 0.02736807824397163,\n \"acc_norm\": 0.6334405144694534,\n\
\ \"acc_norm_stderr\": 0.02736807824397163\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6049382716049383,\n \"acc_stderr\": 0.027201117666925647,\n\
\ \"acc_norm\": 0.6049382716049383,\n \"acc_norm_stderr\": 0.027201117666925647\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4219858156028369,\n \"acc_stderr\": 0.0294621892333706,\n \
\ \"acc_norm\": 0.4219858156028369,\n \"acc_norm_stderr\": 0.0294621892333706\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4048239895697523,\n\
\ \"acc_stderr\": 0.012536743830954,\n \"acc_norm\": 0.4048239895697523,\n\
\ \"acc_norm_stderr\": 0.012536743830954\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.4375,\n \"acc_stderr\": 0.030134614954403924,\n \
\ \"acc_norm\": 0.4375,\n \"acc_norm_stderr\": 0.030134614954403924\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.5620915032679739,\n \"acc_stderr\": 0.020071257886886525,\n \
\ \"acc_norm\": 0.5620915032679739,\n \"acc_norm_stderr\": 0.020071257886886525\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n\
\ \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n\
\ \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.6612244897959184,\n \"acc_stderr\": 0.030299506562154185,\n\
\ \"acc_norm\": 0.6612244897959184,\n \"acc_norm_stderr\": 0.030299506562154185\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7661691542288557,\n\
\ \"acc_stderr\": 0.029929415408348384,\n \"acc_norm\": 0.7661691542288557,\n\
\ \"acc_norm_stderr\": 0.029929415408348384\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \
\ \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.044619604333847394\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4819277108433735,\n\
\ \"acc_stderr\": 0.038899512528272166,\n \"acc_norm\": 0.4819277108433735,\n\
\ \"acc_norm_stderr\": 0.038899512528272166\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.03615507630310936,\n\
\ \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.03615507630310936\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.32558139534883723,\n\
\ \"mc1_stderr\": 0.016403989469907825,\n \"mc2\": 0.4735785703076855,\n\
\ \"mc2_stderr\": 0.015147123054621936\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.749802683504341,\n \"acc_stderr\": 0.012173009642449144\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.4844579226686884,\n \
\ \"acc_stderr\": 0.013765829454512886\n }\n}\n```"
repo_url: https://huggingface.co/aboros98/merlin1.4
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|arc:challenge|25_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|gsm8k|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hellaswag|10_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-15T12-59-35.847418.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-15T12-59-35.847418.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- '**/details_harness|winogrande|5_2024-03-15T12-59-35.847418.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-15T12-59-35.847418.parquet'
- config_name: results
data_files:
- split: 2024_03_15T12_59_35.847418
path:
- results_2024-03-15T12-59-35.847418.parquet
- split: latest
path:
- results_2024-03-15T12-59-35.847418.parquet
---
# Dataset Card for Evaluation run of aboros98/merlin1.4
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [aboros98/merlin1.4](https://huggingface.co/aboros98/merlin1.4) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
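The timestamped split names listed in this card's metadata appear to be derived mechanically from the run timestamp; a quick sketch of that convention (inferred from the split names above, not an official API):

```python
# Sketch: mapping a run timestamp to the split names listed in this card's
# metadata. The convention is inferred, not documented: '-' and ':' are
# replaced with '_' so the timestamp is a valid split name.
run_timestamp = "2024-03-15T12:59:35.847418"
split_name = run_timestamp.replace("-", "_").replace(":", "_")
print(split_name)  # 2024_03_15T12_59_35.847418
```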
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_aboros98__merlin1.4",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-03-15T12:59:35.847418](https://huggingface.co/datasets/open-llm-leaderboard/details_aboros98__merlin1.4/blob/main/results_2024-03-15T12-59-35.847418.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5651825045623053,
"acc_stderr": 0.03399328820379374,
"acc_norm": 0.5670113817407626,
"acc_norm_stderr": 0.03469345112171627,
"mc1": 0.32558139534883723,
"mc1_stderr": 0.016403989469907825,
"mc2": 0.4735785703076855,
"mc2_stderr": 0.015147123054621936
},
"harness|arc:challenge|25": {
"acc": 0.5631399317406144,
"acc_stderr": 0.014494421584256519,
"acc_norm": 0.5930034129692833,
"acc_norm_stderr": 0.014356399418009123
},
"harness|hellaswag|10": {
"acc": 0.5630352519418442,
"acc_stderr": 0.004949969363017659,
"acc_norm": 0.7449711212905795,
"acc_norm_stderr": 0.004349866376068982
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.27,
"acc_stderr": 0.04461960433384741,
"acc_norm": 0.27,
"acc_norm_stderr": 0.04461960433384741
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.4444444444444444,
"acc_stderr": 0.04292596718256981,
"acc_norm": 0.4444444444444444,
"acc_norm_stderr": 0.04292596718256981
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5723684210526315,
"acc_stderr": 0.04026097083296563,
"acc_norm": 0.5723684210526315,
"acc_norm_stderr": 0.04026097083296563
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.55,
"acc_stderr": 0.049999999999999996,
"acc_norm": 0.55,
"acc_norm_stderr": 0.049999999999999996
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.5849056603773585,
"acc_stderr": 0.03032594578928611,
"acc_norm": 0.5849056603773585,
"acc_norm_stderr": 0.03032594578928611
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6041666666666666,
"acc_stderr": 0.04089465449325582,
"acc_norm": 0.6041666666666666,
"acc_norm_stderr": 0.04089465449325582
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.36,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.36,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.43,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.43,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.42,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.42,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5491329479768786,
"acc_stderr": 0.03794012674697031,
"acc_norm": 0.5491329479768786,
"acc_norm_stderr": 0.03794012674697031
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.29411764705882354,
"acc_stderr": 0.045338381959297736,
"acc_norm": 0.29411764705882354,
"acc_norm_stderr": 0.045338381959297736
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5276595744680851,
"acc_stderr": 0.03263597118409769,
"acc_norm": 0.5276595744680851,
"acc_norm_stderr": 0.03263597118409769
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.35964912280701755,
"acc_stderr": 0.045144961328736334,
"acc_norm": 0.35964912280701755,
"acc_norm_stderr": 0.045144961328736334
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5793103448275863,
"acc_stderr": 0.0411391498118926,
"acc_norm": 0.5793103448275863,
"acc_norm_stderr": 0.0411391498118926
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.41798941798941797,
"acc_stderr": 0.025402555503260912,
"acc_norm": 0.41798941798941797,
"acc_norm_stderr": 0.025402555503260912
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.36507936507936506,
"acc_stderr": 0.04306241259127153,
"acc_norm": 0.36507936507936506,
"acc_norm_stderr": 0.04306241259127153
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.4,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.4,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6709677419354839,
"acc_stderr": 0.026729499068349958,
"acc_norm": 0.6709677419354839,
"acc_norm_stderr": 0.026729499068349958
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.49261083743842365,
"acc_stderr": 0.035176035403610084,
"acc_norm": 0.49261083743842365,
"acc_norm_stderr": 0.035176035403610084
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.65,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.65,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.6363636363636364,
"acc_stderr": 0.03756335775187899,
"acc_norm": 0.6363636363636364,
"acc_norm_stderr": 0.03756335775187899
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.702020202020202,
"acc_stderr": 0.03258630383836556,
"acc_norm": 0.702020202020202,
"acc_norm_stderr": 0.03258630383836556
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.7616580310880829,
"acc_stderr": 0.030748905363909895,
"acc_norm": 0.7616580310880829,
"acc_norm_stderr": 0.030748905363909895
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5461538461538461,
"acc_stderr": 0.025242770987126177,
"acc_norm": 0.5461538461538461,
"acc_norm_stderr": 0.025242770987126177
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.31851851851851853,
"acc_stderr": 0.028406533090608463,
"acc_norm": 0.31851851851851853,
"acc_norm_stderr": 0.028406533090608463
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.5798319327731093,
"acc_stderr": 0.03206183783236152,
"acc_norm": 0.5798319327731093,
"acc_norm_stderr": 0.03206183783236152
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3443708609271523,
"acc_stderr": 0.038796870240733264,
"acc_norm": 0.3443708609271523,
"acc_norm_stderr": 0.038796870240733264
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7743119266055046,
"acc_stderr": 0.017923087667803067,
"acc_norm": 0.7743119266055046,
"acc_norm_stderr": 0.017923087667803067
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4212962962962963,
"acc_stderr": 0.03367462138896078,
"acc_norm": 0.4212962962962963,
"acc_norm_stderr": 0.03367462138896078
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.6421568627450981,
"acc_stderr": 0.03364487286088299,
"acc_norm": 0.6421568627450981,
"acc_norm_stderr": 0.03364487286088299
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7088607594936709,
"acc_stderr": 0.029571601065753374,
"acc_norm": 0.7088607594936709,
"acc_norm_stderr": 0.029571601065753374
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6322869955156951,
"acc_stderr": 0.03236198350928275,
"acc_norm": 0.6322869955156951,
"acc_norm_stderr": 0.03236198350928275
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6717557251908397,
"acc_stderr": 0.04118438565806298,
"acc_norm": 0.6717557251908397,
"acc_norm_stderr": 0.04118438565806298
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.71900826446281,
"acc_stderr": 0.04103203830514511,
"acc_norm": 0.71900826446281,
"acc_norm_stderr": 0.04103203830514511
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.043300437496507437,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.043300437496507437
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6993865030674846,
"acc_stderr": 0.03602511318806771,
"acc_norm": 0.6993865030674846,
"acc_norm_stderr": 0.03602511318806771
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5446428571428571,
"acc_stderr": 0.04726835553719097,
"acc_norm": 0.5446428571428571,
"acc_norm_stderr": 0.04726835553719097
},
"harness|hendrycksTest-management|5": {
"acc": 0.7184466019417476,
"acc_stderr": 0.04453254836326468,
"acc_norm": 0.7184466019417476,
"acc_norm_stderr": 0.04453254836326468
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8162393162393162,
"acc_stderr": 0.025372139671722933,
"acc_norm": 0.8162393162393162,
"acc_norm_stderr": 0.025372139671722933
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939098,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939098
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.6896551724137931,
"acc_stderr": 0.01654378502604831,
"acc_norm": 0.6896551724137931,
"acc_norm_stderr": 0.01654378502604831
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6705202312138728,
"acc_stderr": 0.025305258131879716,
"acc_norm": 0.6705202312138728,
"acc_norm_stderr": 0.025305258131879716
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.264804469273743,
"acc_stderr": 0.01475690648326066,
"acc_norm": 0.264804469273743,
"acc_norm_stderr": 0.01475690648326066
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6143790849673203,
"acc_stderr": 0.02787074527829027,
"acc_norm": 0.6143790849673203,
"acc_norm_stderr": 0.02787074527829027
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6334405144694534,
"acc_stderr": 0.02736807824397163,
"acc_norm": 0.6334405144694534,
"acc_norm_stderr": 0.02736807824397163
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6049382716049383,
"acc_stderr": 0.027201117666925647,
"acc_norm": 0.6049382716049383,
"acc_norm_stderr": 0.027201117666925647
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4219858156028369,
"acc_stderr": 0.0294621892333706,
"acc_norm": 0.4219858156028369,
"acc_norm_stderr": 0.0294621892333706
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4048239895697523,
"acc_stderr": 0.012536743830954,
"acc_norm": 0.4048239895697523,
"acc_norm_stderr": 0.012536743830954
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.4375,
"acc_stderr": 0.030134614954403924,
"acc_norm": 0.4375,
"acc_norm_stderr": 0.030134614954403924
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5620915032679739,
"acc_stderr": 0.020071257886886525,
"acc_norm": 0.5620915032679739,
"acc_norm_stderr": 0.020071257886886525
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302506,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302506
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6612244897959184,
"acc_stderr": 0.030299506562154185,
"acc_norm": 0.6612244897959184,
"acc_norm_stderr": 0.030299506562154185
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7661691542288557,
"acc_stderr": 0.029929415408348384,
"acc_norm": 0.7661691542288557,
"acc_norm_stderr": 0.029929415408348384
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.73,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.73,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4819277108433735,
"acc_stderr": 0.038899512528272166,
"acc_norm": 0.4819277108433735,
"acc_norm_stderr": 0.038899512528272166
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.03615507630310936,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.03615507630310936
},
"harness|truthfulqa:mc|0": {
"mc1": 0.32558139534883723,
"mc1_stderr": 0.016403989469907825,
"mc2": 0.4735785703076855,
"mc2_stderr": 0.015147123054621936
},
"harness|winogrande|5": {
"acc": 0.749802683504341,
"acc_stderr": 0.012173009642449144
},
"harness|gsm8k|5": {
"acc": 0.4844579226686884,
"acc_stderr": 0.013765829454512886
}
}
```
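The top-level `all` entry appears to be an unweighted average of the per-task metrics; a minimal sketch of that aggregation (an inference from the numbers above, not documented behavior, using only a small illustrative subset of tasks):

```python
# Sketch: averaging per-task accuracies into an overall score, as the
# top-level "all" entry appears to do. Only a few tasks are shown here.
task_acc = {
    "harness|arc:challenge|25": 0.5631399317406144,
    "harness|hellaswag|10": 0.5630352519418442,
    "harness|hendrycksTest-abstract_algebra|5": 0.27,
}
overall_acc = sum(task_acc.values()) / len(task_acc)
```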
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
lhoestq/pokemonData | ---
configs:
- config_name: default
data_files: data/Pokemon.csv
license: cc0-1.0
language:
- en
tags:
- pokemon
---
# Dataset card for pokemonData
Dataset from https://github.com/lgreski/pokemonData listing all the Pokémon up to Generation 9 and their characteristics:
- ID
- Name
- Form
- Type1
- Type2
- Total
- HP
- Attack
- Defense
- Sp. Atk
- Sp. Def
- Speed
- Generation |
CVasNLPExperiments/FGVC_Aircraft_test_google_flan_t5_xxl_mode_T_SPECIFIC_ns_100 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: prompt
dtype: string
- name: true_label
dtype: string
- name: prediction
dtype: string
splits:
- name: fewshot_0_clip_tags_ViT_L_14_Attributes_ViT_L_14_text_davinci_003_full_clip_tags_ViT_L_14_simple_specific_rices
num_bytes: 14364
num_examples: 100
download_size: 3992
dataset_size: 14364
---
# Dataset Card for "FGVC_Aircraft_test_google_flan_t5_xxl_mode_T_SPECIFIC_ns_100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
huggingartists/pyrokinesis | ---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/pyrokinesis"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.7954 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
        <div style="display:block; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/e701c222dfb8725065dd99c8a43988da.1000x1000x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/pyrokinesis">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">pyrokinesis</div>
<a href="https://genius.com/artists/pyrokinesis">
<div style="text-align: center; font-size: 14px;">@pyrokinesis</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/pyrokinesis).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/pyrokinesis")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train | validation | test |
|------:|-----------:|-----:|
|   202 |          - |    - |
The 'train' split can easily be divided into 'train', 'validation' and 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/pyrokinesis")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
texts = datasets['train']['text']
train, validation, test = np.split(
    texts,
    [int(len(texts) * train_percentage),
     int(len(texts) * (train_percentage + validation_percentage))],
)
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk},
year={2021}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
tyzhu/lmind_nq_train10000_eval6489_v1_doc | ---
configs:
- config_name: default
data_files:
- split: train_qa
path: data/train_qa-*
- split: train_recite_qa
path: data/train_recite_qa-*
- split: eval_qa
path: data/eval_qa-*
- split: eval_recite_qa
path: data/eval_recite_qa-*
- split: all_docs
path: data/all_docs-*
- split: all_docs_eval
path: data/all_docs_eval-*
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: answers
struct:
- name: answer_start
sequence: 'null'
- name: text
sequence: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train_qa
num_bytes: 1159729
num_examples: 10000
- name: train_recite_qa
num_bytes: 7573876
num_examples: 10000
- name: eval_qa
num_bytes: 752802
num_examples: 6489
- name: eval_recite_qa
num_bytes: 4912675
num_examples: 6489
- name: all_docs
num_bytes: 9144930
num_examples: 14014
- name: all_docs_eval
num_bytes: 9144126
num_examples: 14014
- name: train
num_bytes: 9144930
num_examples: 14014
- name: validation
num_bytes: 9144930
num_examples: 14014
download_size: 31863130
dataset_size: 50977998
---
# Dataset Card for "lmind_nq_train10000_eval6489_v1_doc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-eval-futin__guess-vi_3-74fd83-2087367158 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- futin/guess
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-1b1
metrics: []
dataset_name: futin/guess
dataset_config: vi_3
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b1
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. |
text-machine-lab/ROLE-1500 | ---
license: mit
---
ROLE-1500 is the extended version of ROLE-88. The dataset was extended using GPT-3.
If this dataset is useful to you, please cite our work:
```
@article{shivagunde2023larger,
  title={Larger Probes Tell a Different Story: Extending Psycholinguistic Datasets Via In-Context Learning},
  author={Shivagunde, Namrata and Lialin, Vladislav and Rumshisky, Anna},
  journal={arXiv preprint arXiv:2303.16445},
  year={2023}
}
```
|
suryo/semabrang | ---
license: bigscience-openrail-m
---
|
AdapterOcean/physics_dataset_standardized_cluster_4 | ---
dataset_info:
features:
- name: text
dtype: string
- name: conversation_id
dtype: int64
- name: embedding
sequence: float64
- name: cluster
dtype: int64
splits:
- name: train
num_bytes: 35449571
num_examples: 3438
download_size: 0
dataset_size: 35449571
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "physics_dataset_standardized_cluster_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bryanchrist/annotations | ---
license: gpl-3.0
---
## MATHWELL Human Annotation Dataset
The MATHWELL Human Annotation Dataset contains 4,734 synthetic word problems and answers generated by [MATHWELL](https://huggingface.co/bryanchrist/MATHWELL), a context-free grade school math word problem generator released in [MATHWELL: Generating Educational Math Word Problems at Scale](https://arxiv.org/abs/2402.15861), and by comparison models (GPT-4, GPT-3.5, Llama-2, MAmmoTH, and LLEMMA), each with expert human annotations for solvability, accuracy, appropriateness, and meets all criteria (MaC). Solvability means the problem is mathematically possible to solve; accuracy means the Program of Thought (PoT) solution arrives at the correct answer; appropriateness means the mathematical topic is familiar to a grade school student and the question's context is suitable for a young learner; MaC denotes questions labeled as solvable, accurate, and appropriate. Null values for accuracy and appropriateness indicate a question labeled as unsolvable, which cannot have an accurate solution and is automatically inappropriate. Based on our annotations, 82.2% of the question/answer pairs are solvable, 87.2% have accurate solutions, 68.6% are appropriate, and 58.8% meet all criteria.
This dataset is designed to train text classifiers to automatically label word problem generator outputs for solvability, accuracy, and appropriateness. More details about the dataset can be found in our [paper](https://arxiv.org/abs/2402.15861).
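The MaC rule described above (a question meets all criteria only when it is solvable, accurate, and appropriate, with unsolvable questions failing automatically) can be sketched in a few lines. The column names below are illustrative assumptions for the example, not necessarily the dataset's actual schema:

```python
# Minimal sketch of deriving the meets-all-criteria (MaC) label from the three
# expert annotations. Field names ("solvable", "accurate", "appropriate") are
# hypothetical; consult the dataset itself for the real column names.
def meets_all_criteria(row):
    # Unsolvable questions have null accuracy/appropriateness and fail MaC.
    if not row["solvable"]:
        return False
    return bool(row["accurate"]) and bool(row["appropriate"])

annotations = [
    {"solvable": True,  "accurate": True, "appropriate": True},
    {"solvable": True,  "accurate": True, "appropriate": False},
    {"solvable": False, "accurate": None, "appropriate": None},
]
mac_labels = [meets_all_criteria(r) for r in annotations]
```

Applied over the full annotation set, the fraction of `True` labels would reproduce the 58.8% MaC rate reported above.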
## Citation
```bibtex
@misc{christ2024mathwell,
title={MATHWELL: Generating Educational Math Word Problems at Scale},
author={Bryan R Christ and Jonathan Kropko and Thomas Hartvigsen},
year={2024},
eprint={2402.15861},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
spacerini/gpt2-outputs | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: length
dtype: int64
- name: ended
dtype: bool
- name: source
dtype: string
splits:
- name: train
num_bytes: 6865376016
num_examples: 2340000
download_size: 4387185259
dataset_size: 6865376016
---
# Dataset Card for "gpt2-outputs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HuggingFaceM4/OK-VQA_modif-Sample | Invalid username or password. |
dsafdfsge/gfdgdgd | ---
license: apache-2.0
---
|
open-llm-leaderboard/details_lvkaokao__mistral-7b-finetuned-orca-dpo-v2 | ---
pretty_name: Evaluation run of lvkaokao/mistral-7b-finetuned-orca-dpo-v2
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [lvkaokao/mistral-7b-finetuned-orca-dpo-v2](https://huggingface.co/lvkaokao/mistral-7b-finetuned-orca-dpo-v2)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_lvkaokao__mistral-7b-finetuned-orca-dpo-v2_public\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-11-14T06:32:58.460439](https://huggingface.co/datasets/open-llm-leaderboard/details_lvkaokao__mistral-7b-finetuned-orca-dpo-v2_public/blob/main/results_2023-11-14T06-32-58.460439.json)(note\
\ that there might be results for other tasks in the repo if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6198496581816488,\n\
\ \"acc_stderr\": 0.03259259478405919,\n \"acc_norm\": 0.627996598760343,\n\
\ \"acc_norm_stderr\": 0.03329289442488,\n \"mc1\": 0.44430844553243576,\n\
\ \"mc1_stderr\": 0.01739458625074317,\n \"mc2\": 0.596468573226102,\n\
\ \"mc2_stderr\": 0.015337888566380171,\n \"em\": 0.31512164429530204,\n\
\ \"em_stderr\": 0.004757573308442557,\n \"f1\": 0.43838401845637875,\n\
\ \"f1_stderr\": 0.004511299753314001\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6356655290102389,\n \"acc_stderr\": 0.014063260279882415,\n\
\ \"acc_norm\": 0.6621160409556314,\n \"acc_norm_stderr\": 0.013822047922283507\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6448914558852819,\n\
\ \"acc_stderr\": 0.004775681871529863,\n \"acc_norm\": 0.836387173869747,\n\
\ \"acc_norm_stderr\": 0.003691678495767969\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \
\ \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6222222222222222,\n\
\ \"acc_stderr\": 0.04188307537595853,\n \"acc_norm\": 0.6222222222222222,\n\
\ \"acc_norm_stderr\": 0.04188307537595853\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6710526315789473,\n \"acc_stderr\": 0.03823428969926605,\n\
\ \"acc_norm\": 0.6710526315789473,\n \"acc_norm_stderr\": 0.03823428969926605\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.54,\n\
\ \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.54,\n \
\ \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.660377358490566,\n \"acc_stderr\": 0.02914690474779834,\n\
\ \"acc_norm\": 0.660377358490566,\n \"acc_norm_stderr\": 0.02914690474779834\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7291666666666666,\n\
\ \"acc_stderr\": 0.03716177437566017,\n \"acc_norm\": 0.7291666666666666,\n\
\ \"acc_norm_stderr\": 0.03716177437566017\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.43,\n \"acc_stderr\": 0.04975698519562428,\n \
\ \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.04975698519562428\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.53,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.53,\n\
\ \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6242774566473989,\n\
\ \"acc_stderr\": 0.036928207672648664,\n \"acc_norm\": 0.6242774566473989,\n\
\ \"acc_norm_stderr\": 0.036928207672648664\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.39215686274509803,\n \"acc_stderr\": 0.04858083574266345,\n\
\ \"acc_norm\": 0.39215686274509803,\n \"acc_norm_stderr\": 0.04858083574266345\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.74,\n \"acc_stderr\": 0.044084400227680794,\n \"acc_norm\": 0.74,\n\
\ \"acc_norm_stderr\": 0.044084400227680794\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5234042553191489,\n \"acc_stderr\": 0.032650194750335815,\n\
\ \"acc_norm\": 0.5234042553191489,\n \"acc_norm_stderr\": 0.032650194750335815\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.45614035087719296,\n\
\ \"acc_stderr\": 0.046854730419077895,\n \"acc_norm\": 0.45614035087719296,\n\
\ \"acc_norm_stderr\": 0.046854730419077895\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5310344827586206,\n \"acc_stderr\": 0.04158632762097828,\n\
\ \"acc_norm\": 0.5310344827586206,\n \"acc_norm_stderr\": 0.04158632762097828\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.37566137566137564,\n \"acc_stderr\": 0.024942368931159788,\n \"\
acc_norm\": 0.37566137566137564,\n \"acc_norm_stderr\": 0.024942368931159788\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4365079365079365,\n\
\ \"acc_stderr\": 0.04435932892851466,\n \"acc_norm\": 0.4365079365079365,\n\
\ \"acc_norm_stderr\": 0.04435932892851466\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.7677419354838709,\n \"acc_stderr\": 0.024022256130308235,\n \"\
acc_norm\": 0.7677419354838709,\n \"acc_norm_stderr\": 0.024022256130308235\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.5172413793103449,\n \"acc_stderr\": 0.035158955511656986,\n \"\
acc_norm\": 0.5172413793103449,\n \"acc_norm_stderr\": 0.035158955511656986\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\"\
: 0.66,\n \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7636363636363637,\n \"acc_stderr\": 0.03317505930009182,\n\
\ \"acc_norm\": 0.7636363636363637,\n \"acc_norm_stderr\": 0.03317505930009182\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7626262626262627,\n \"acc_stderr\": 0.030313710538198896,\n \"\
acc_norm\": 0.7626262626262627,\n \"acc_norm_stderr\": 0.030313710538198896\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.02199531196364424,\n\
\ \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.02199531196364424\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6076923076923076,\n \"acc_stderr\": 0.024756000382130952,\n\
\ \"acc_norm\": 0.6076923076923076,\n \"acc_norm_stderr\": 0.024756000382130952\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.34444444444444444,\n \"acc_stderr\": 0.02897264888484427,\n \
\ \"acc_norm\": 0.34444444444444444,\n \"acc_norm_stderr\": 0.02897264888484427\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6722689075630253,\n \"acc_stderr\": 0.03048991141767323,\n \
\ \"acc_norm\": 0.6722689075630253,\n \"acc_norm_stderr\": 0.03048991141767323\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3576158940397351,\n \"acc_stderr\": 0.03913453431177258,\n \"\
acc_norm\": 0.3576158940397351,\n \"acc_norm_stderr\": 0.03913453431177258\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8330275229357799,\n \"acc_stderr\": 0.01599015488507338,\n \"\
acc_norm\": 0.8330275229357799,\n \"acc_norm_stderr\": 0.01599015488507338\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.49537037037037035,\n \"acc_stderr\": 0.03409825519163572,\n \"\
acc_norm\": 0.49537037037037035,\n \"acc_norm_stderr\": 0.03409825519163572\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7990196078431373,\n \"acc_stderr\": 0.02812597226565438,\n \"\
acc_norm\": 0.7990196078431373,\n \"acc_norm_stderr\": 0.02812597226565438\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7890295358649789,\n \"acc_stderr\": 0.02655837250266192,\n \
\ \"acc_norm\": 0.7890295358649789,\n \"acc_norm_stderr\": 0.02655837250266192\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6816143497757847,\n\
\ \"acc_stderr\": 0.03126580522513713,\n \"acc_norm\": 0.6816143497757847,\n\
\ \"acc_norm_stderr\": 0.03126580522513713\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.732824427480916,\n \"acc_stderr\": 0.038808483010823944,\n\
\ \"acc_norm\": 0.732824427480916,\n \"acc_norm_stderr\": 0.038808483010823944\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8016528925619835,\n \"acc_stderr\": 0.03640118271990947,\n \"\
acc_norm\": 0.8016528925619835,\n \"acc_norm_stderr\": 0.03640118271990947\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7592592592592593,\n\
\ \"acc_stderr\": 0.04133119440243839,\n \"acc_norm\": 0.7592592592592593,\n\
\ \"acc_norm_stderr\": 0.04133119440243839\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7177914110429447,\n \"acc_stderr\": 0.03536117886664742,\n\
\ \"acc_norm\": 0.7177914110429447,\n \"acc_norm_stderr\": 0.03536117886664742\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.48214285714285715,\n\
\ \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.48214285714285715,\n\
\ \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8155339805825242,\n \"acc_stderr\": 0.03840423627288276,\n\
\ \"acc_norm\": 0.8155339805825242,\n \"acc_norm_stderr\": 0.03840423627288276\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8589743589743589,\n\
\ \"acc_stderr\": 0.022801382534597528,\n \"acc_norm\": 0.8589743589743589,\n\
\ \"acc_norm_stderr\": 0.022801382534597528\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542128,\n \
\ \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.04512608598542128\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8122605363984674,\n\
\ \"acc_stderr\": 0.013964393769899143,\n \"acc_norm\": 0.8122605363984674,\n\
\ \"acc_norm_stderr\": 0.013964393769899143\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6763005780346821,\n \"acc_stderr\": 0.025190181327608408,\n\
\ \"acc_norm\": 0.6763005780346821,\n \"acc_norm_stderr\": 0.025190181327608408\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3865921787709497,\n\
\ \"acc_stderr\": 0.016286674879101022,\n \"acc_norm\": 0.3865921787709497,\n\
\ \"acc_norm_stderr\": 0.016286674879101022\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6928104575163399,\n \"acc_stderr\": 0.026415601914388995,\n\
\ \"acc_norm\": 0.6928104575163399,\n \"acc_norm_stderr\": 0.026415601914388995\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6752411575562701,\n\
\ \"acc_stderr\": 0.026596782287697043,\n \"acc_norm\": 0.6752411575562701,\n\
\ \"acc_norm_stderr\": 0.026596782287697043\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7222222222222222,\n \"acc_stderr\": 0.024922001168886324,\n\
\ \"acc_norm\": 0.7222222222222222,\n \"acc_norm_stderr\": 0.024922001168886324\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.43617021276595747,\n \"acc_stderr\": 0.029583452036284066,\n \
\ \"acc_norm\": 0.43617021276595747,\n \"acc_norm_stderr\": 0.029583452036284066\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.44784876140808344,\n\
\ \"acc_stderr\": 0.012700582404768223,\n \"acc_norm\": 0.44784876140808344,\n\
\ \"acc_norm_stderr\": 0.012700582404768223\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6580882352941176,\n \"acc_stderr\": 0.028814722422254187,\n\
\ \"acc_norm\": 0.6580882352941176,\n \"acc_norm_stderr\": 0.028814722422254187\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6405228758169934,\n \"acc_stderr\": 0.01941253924203216,\n \
\ \"acc_norm\": 0.6405228758169934,\n \"acc_norm_stderr\": 0.01941253924203216\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6545454545454545,\n\
\ \"acc_stderr\": 0.04554619617541054,\n \"acc_norm\": 0.6545454545454545,\n\
\ \"acc_norm_stderr\": 0.04554619617541054\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.710204081632653,\n \"acc_stderr\": 0.029043088683304328,\n\
\ \"acc_norm\": 0.710204081632653,\n \"acc_norm_stderr\": 0.029043088683304328\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.835820895522388,\n\
\ \"acc_stderr\": 0.026193923544454142,\n \"acc_norm\": 0.835820895522388,\n\
\ \"acc_norm_stderr\": 0.026193923544454142\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.81,\n \"acc_stderr\": 0.03942772444036625,\n \
\ \"acc_norm\": 0.81,\n \"acc_norm_stderr\": 0.03942772444036625\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5180722891566265,\n\
\ \"acc_stderr\": 0.03889951252827216,\n \"acc_norm\": 0.5180722891566265,\n\
\ \"acc_norm_stderr\": 0.03889951252827216\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8187134502923976,\n \"acc_stderr\": 0.029547741687640038,\n\
\ \"acc_norm\": 0.8187134502923976,\n \"acc_norm_stderr\": 0.029547741687640038\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.44430844553243576,\n\
\ \"mc1_stderr\": 0.01739458625074317,\n \"mc2\": 0.596468573226102,\n\
\ \"mc2_stderr\": 0.015337888566380171\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7813733228097869,\n \"acc_stderr\": 0.011616198215773236\n\
\ },\n \"harness|drop|3\": {\n \"em\": 0.31512164429530204,\n \
\ \"em_stderr\": 0.004757573308442557,\n \"f1\": 0.43838401845637875,\n\
\ \"f1_stderr\": 0.004511299753314001\n },\n \"harness|gsm8k|5\": {\n\
\ \"acc\": 0.1956027293404094,\n \"acc_stderr\": 0.010926096810556464\n\
\ }\n}\n```"
repo_url: https://huggingface.co/lvkaokao/mistral-7b-finetuned-orca-dpo-v2
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|arc:challenge|25_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|drop|3_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|gsm8k|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hellaswag|10_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-11-14T06-32-58.460439.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-management|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-virology|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|truthfulqa:mc|0_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-11-14T06-32-58.460439.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- '**/details_harness|winogrande|5_2023-11-14T06-32-58.460439.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-11-14T06-32-58.460439.parquet'
- config_name: results
data_files:
- split: 2023_11_14T06_32_58.460439
path:
- results_2023-11-14T06-32-58.460439.parquet
- split: latest
path:
- results_2023-11-14T06-32-58.460439.parquet
---
# Dataset Card for Evaluation run of lvkaokao/mistral-7b-finetuned-orca-dpo-v2
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/lvkaokao/mistral-7b-finetuned-orca-dpo-v2
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [lvkaokao/mistral-7b-finetuned-orca-dpo-v2](https://huggingface.co/lvkaokao/mistral-7b-finetuned-orca-dpo-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_lvkaokao__mistral-7b-finetuned-orca-dpo-v2_public",
"harness_winogrande_5",
split="train")
```
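Each configuration's `data_files` entry in the YAML header above is a glob pattern over the repository's parquet files. As a purely illustrative, offline sketch (the Hub's actual resolver may differ), Python's `fnmatch` shows how such a pattern selects the matching file; the directory name below is hypothetical:

```python
import fnmatch

# Glob pattern taken from the "harness_winogrande_5" config above.
pattern = "**/details_harness|winogrande|5_2023-11-14T06-32-58.460439.parquet"

# Hypothetical repository layout: one timestamped directory per run.
files = [
    "2023-11-14T06-32-58.460439/details_harness|winogrande|5_2023-11-14T06-32-58.460439.parquet",
    "2023-11-14T06-32-58.460439/results_2023-11-14T06-32-58.460439.parquet",
]

# fnmatch's "*" matches across "/" too, so "**/" behaves like "any prefix".
matches = [f for f in files if fnmatch.fnmatch(f, pattern)]
```

Only the details file for the named task matches; the aggregated `results_*.parquet` file is picked up by the separate `results` configuration instead.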
## Latest results
These are the [latest results from run 2023-11-14T06:32:58.460439](https://huggingface.co/datasets/open-llm-leaderboard/details_lvkaokao__mistral-7b-finetuned-orca-dpo-v2_public/blob/main/results_2023-11-14T06-32-58.460439.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in its own configuration, under the "latest" split):
```python
{
"all": {
"acc": 0.6198496581816488,
"acc_stderr": 0.03259259478405919,
"acc_norm": 0.627996598760343,
"acc_norm_stderr": 0.03329289442488,
"mc1": 0.44430844553243576,
"mc1_stderr": 0.01739458625074317,
"mc2": 0.596468573226102,
"mc2_stderr": 0.015337888566380171,
"em": 0.31512164429530204,
"em_stderr": 0.004757573308442557,
"f1": 0.43838401845637875,
"f1_stderr": 0.004511299753314001
},
"harness|arc:challenge|25": {
"acc": 0.6356655290102389,
"acc_stderr": 0.014063260279882415,
"acc_norm": 0.6621160409556314,
"acc_norm_stderr": 0.013822047922283507
},
"harness|hellaswag|10": {
"acc": 0.6448914558852819,
"acc_stderr": 0.004775681871529863,
"acc_norm": 0.836387173869747,
"acc_norm_stderr": 0.003691678495767969
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6222222222222222,
"acc_stderr": 0.04188307537595853,
"acc_norm": 0.6222222222222222,
"acc_norm_stderr": 0.04188307537595853
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6710526315789473,
"acc_stderr": 0.03823428969926605,
"acc_norm": 0.6710526315789473,
"acc_norm_stderr": 0.03823428969926605
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.660377358490566,
"acc_stderr": 0.02914690474779834,
"acc_norm": 0.660377358490566,
"acc_norm_stderr": 0.02914690474779834
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7291666666666666,
"acc_stderr": 0.03716177437566017,
"acc_norm": 0.7291666666666666,
"acc_norm_stderr": 0.03716177437566017
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.43,
"acc_stderr": 0.04975698519562428,
"acc_norm": 0.43,
"acc_norm_stderr": 0.04975698519562428
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.53,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.53,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6242774566473989,
"acc_stderr": 0.036928207672648664,
"acc_norm": 0.6242774566473989,
"acc_norm_stderr": 0.036928207672648664
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.39215686274509803,
"acc_stderr": 0.04858083574266345,
"acc_norm": 0.39215686274509803,
"acc_norm_stderr": 0.04858083574266345
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.74,
"acc_stderr": 0.044084400227680794,
"acc_norm": 0.74,
"acc_norm_stderr": 0.044084400227680794
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5234042553191489,
"acc_stderr": 0.032650194750335815,
"acc_norm": 0.5234042553191489,
"acc_norm_stderr": 0.032650194750335815
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.45614035087719296,
"acc_stderr": 0.046854730419077895,
"acc_norm": 0.45614035087719296,
"acc_norm_stderr": 0.046854730419077895
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5310344827586206,
"acc_stderr": 0.04158632762097828,
"acc_norm": 0.5310344827586206,
"acc_norm_stderr": 0.04158632762097828
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.37566137566137564,
"acc_stderr": 0.024942368931159788,
"acc_norm": 0.37566137566137564,
"acc_norm_stderr": 0.024942368931159788
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4365079365079365,
"acc_stderr": 0.04435932892851466,
"acc_norm": 0.4365079365079365,
"acc_norm_stderr": 0.04435932892851466
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7677419354838709,
"acc_stderr": 0.024022256130308235,
"acc_norm": 0.7677419354838709,
"acc_norm_stderr": 0.024022256130308235
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5172413793103449,
"acc_stderr": 0.035158955511656986,
"acc_norm": 0.5172413793103449,
"acc_norm_stderr": 0.035158955511656986
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695237,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695237
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7636363636363637,
"acc_stderr": 0.03317505930009182,
"acc_norm": 0.7636363636363637,
"acc_norm_stderr": 0.03317505930009182
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7626262626262627,
"acc_stderr": 0.030313710538198896,
"acc_norm": 0.7626262626262627,
"acc_norm_stderr": 0.030313710538198896
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8963730569948186,
"acc_stderr": 0.02199531196364424,
"acc_norm": 0.8963730569948186,
"acc_norm_stderr": 0.02199531196364424
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6076923076923076,
"acc_stderr": 0.024756000382130952,
"acc_norm": 0.6076923076923076,
"acc_norm_stderr": 0.024756000382130952
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34444444444444444,
"acc_stderr": 0.02897264888484427,
"acc_norm": 0.34444444444444444,
"acc_norm_stderr": 0.02897264888484427
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6722689075630253,
"acc_stderr": 0.03048991141767323,
"acc_norm": 0.6722689075630253,
"acc_norm_stderr": 0.03048991141767323
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3576158940397351,
"acc_stderr": 0.03913453431177258,
"acc_norm": 0.3576158940397351,
"acc_norm_stderr": 0.03913453431177258
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8330275229357799,
"acc_stderr": 0.01599015488507338,
"acc_norm": 0.8330275229357799,
"acc_norm_stderr": 0.01599015488507338
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.49537037037037035,
"acc_stderr": 0.03409825519163572,
"acc_norm": 0.49537037037037035,
"acc_norm_stderr": 0.03409825519163572
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7990196078431373,
"acc_stderr": 0.02812597226565438,
"acc_norm": 0.7990196078431373,
"acc_norm_stderr": 0.02812597226565438
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7890295358649789,
"acc_stderr": 0.02655837250266192,
"acc_norm": 0.7890295358649789,
"acc_norm_stderr": 0.02655837250266192
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6816143497757847,
"acc_stderr": 0.03126580522513713,
"acc_norm": 0.6816143497757847,
"acc_norm_stderr": 0.03126580522513713
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.732824427480916,
"acc_stderr": 0.038808483010823944,
"acc_norm": 0.732824427480916,
"acc_norm_stderr": 0.038808483010823944
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8016528925619835,
"acc_stderr": 0.03640118271990947,
"acc_norm": 0.8016528925619835,
"acc_norm_stderr": 0.03640118271990947
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7592592592592593,
"acc_stderr": 0.04133119440243839,
"acc_norm": 0.7592592592592593,
"acc_norm_stderr": 0.04133119440243839
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7177914110429447,
"acc_stderr": 0.03536117886664742,
"acc_norm": 0.7177914110429447,
"acc_norm_stderr": 0.03536117886664742
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.48214285714285715,
"acc_stderr": 0.047427623612430116,
"acc_norm": 0.48214285714285715,
"acc_norm_stderr": 0.047427623612430116
},
"harness|hendrycksTest-management|5": {
"acc": 0.8155339805825242,
"acc_stderr": 0.03840423627288276,
"acc_norm": 0.8155339805825242,
"acc_norm_stderr": 0.03840423627288276
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8589743589743589,
"acc_stderr": 0.022801382534597528,
"acc_norm": 0.8589743589743589,
"acc_norm_stderr": 0.022801382534597528
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8122605363984674,
"acc_stderr": 0.013964393769899143,
"acc_norm": 0.8122605363984674,
"acc_norm_stderr": 0.013964393769899143
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6763005780346821,
"acc_stderr": 0.025190181327608408,
"acc_norm": 0.6763005780346821,
"acc_norm_stderr": 0.025190181327608408
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3865921787709497,
"acc_stderr": 0.016286674879101022,
"acc_norm": 0.3865921787709497,
"acc_norm_stderr": 0.016286674879101022
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6928104575163399,
"acc_stderr": 0.026415601914388995,
"acc_norm": 0.6928104575163399,
"acc_norm_stderr": 0.026415601914388995
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6752411575562701,
"acc_stderr": 0.026596782287697043,
"acc_norm": 0.6752411575562701,
"acc_norm_stderr": 0.026596782287697043
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.024922001168886324,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.024922001168886324
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.43617021276595747,
"acc_stderr": 0.029583452036284066,
"acc_norm": 0.43617021276595747,
"acc_norm_stderr": 0.029583452036284066
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.44784876140808344,
"acc_stderr": 0.012700582404768223,
"acc_norm": 0.44784876140808344,
"acc_norm_stderr": 0.012700582404768223
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6580882352941176,
"acc_stderr": 0.028814722422254187,
"acc_norm": 0.6580882352941176,
"acc_norm_stderr": 0.028814722422254187
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6405228758169934,
"acc_stderr": 0.01941253924203216,
"acc_norm": 0.6405228758169934,
"acc_norm_stderr": 0.01941253924203216
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6545454545454545,
"acc_stderr": 0.04554619617541054,
"acc_norm": 0.6545454545454545,
"acc_norm_stderr": 0.04554619617541054
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.710204081632653,
"acc_stderr": 0.029043088683304328,
"acc_norm": 0.710204081632653,
"acc_norm_stderr": 0.029043088683304328
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.835820895522388,
"acc_stderr": 0.026193923544454142,
"acc_norm": 0.835820895522388,
"acc_norm_stderr": 0.026193923544454142
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.81,
"acc_stderr": 0.03942772444036625,
"acc_norm": 0.81,
"acc_norm_stderr": 0.03942772444036625
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5180722891566265,
"acc_stderr": 0.03889951252827216,
"acc_norm": 0.5180722891566265,
"acc_norm_stderr": 0.03889951252827216
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8187134502923976,
"acc_stderr": 0.029547741687640038,
"acc_norm": 0.8187134502923976,
"acc_norm_stderr": 0.029547741687640038
},
"harness|truthfulqa:mc|0": {
"mc1": 0.44430844553243576,
"mc1_stderr": 0.01739458625074317,
"mc2": 0.596468573226102,
"mc2_stderr": 0.015337888566380171
},
"harness|winogrande|5": {
"acc": 0.7813733228097869,
"acc_stderr": 0.011616198215773236
},
"harness|drop|3": {
"em": 0.31512164429530204,
"em_stderr": 0.004757573308442557,
"f1": 0.43838401845637875,
"f1_stderr": 0.004511299753314001
},
"harness|gsm8k|5": {
"acc": 0.1956027293404094,
"acc_stderr": 0.010926096810556464
}
}
```
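As a quick sanity check on the numbers above, the aggregate accuracy in the `all` block is (roughly) a macro-average over the per-task scores. A minimal sketch with an illustrative subset of three tasks (not the full suite, so the result will not match the reported `acc`):

```python
# Per-task accuracies copied from the results above; this is an
# illustrative subset, not the full evaluation suite.
task_results = {
    "harness|arc:challenge|25": 0.6356655290102389,
    "harness|hellaswag|10": 0.6448914558852819,
    "harness|winogrande|5": 0.7813733228097869,
}

# Macro-average: unweighted mean over tasks.
macro_acc = sum(task_results.values()) / len(task_results)
print(f"macro-average acc over {len(task_results)} tasks: {macro_acc:.4f}")
```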
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
sreehari267/test | ---
license: apache-2.0
task_categories:
- question-answering
language:
- en
tags:
- not-for-all-audiences
size_categories:
- 100B<n<1T
--- |
GEM/FairytaleQA | ---
annotations_creators:
- expert-created
language_creators:
- unknown
language:
- en
license:
- unknown
multilinguality:
- unknown
size_categories:
- unknown
source_datasets:
- original
task_categories:
- other
task_ids: []
pretty_name: FairytaleQA
tags:
- question-generation
---
# Dataset Card for GEM/FairytaleQA
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/uci-soe/FairytaleQAData
- **Paper:** https://arxiv.org/abs/2203.13947
- **Leaderboard:** https://paperswithcode.com/sota/question-generation-on-fairytaleqa
- **Point of Contact:** Ying Xu, Dakuo Wang
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/FairytaleQA).
### Dataset Summary
The FairytaleQA Dataset is an English-language dataset focusing on narrative comprehension of kindergarten to eighth-grade students. Generated by educational experts based on an evidence-based theoretical framework, FairytaleQA consists of 10,580 explicit and implicit questions derived from 278 child-friendly stories, covering seven types of narrative elements or relations. The dataset was curated to support both the Question Generation and Question Answering tasks.
You can load the dataset via:
```python
import datasets
data = datasets.load_dataset('GEM/FairytaleQA')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/FairytaleQA).
#### paper
[ArXiv](https://arxiv.org/abs/2203.13947)
#### authors
Ying Xu (University of California Irvine); Dakuo Wang (IBM Research); Mo Yu (IBM Research); Daniel Ritchie (University of California Irvine); Bingsheng Yao (Rensselaer Polytechnic Institute); Tongshuang Wu (University of Washington); Zheng Zhang (University of Notre Dame); Toby Jia-Jun Li (University of Notre Dame); Nora Bradford (University of California Irvine); Branda Sun (University of California Irvine); Tran Bao Hoang (University of California Irvine); Yisi Sang (Syracuse University); Yufang Hou (IBM Research Ireland); Xiaojuan Ma (Hong Kong Univ. of Sci and Tech); Diyi Yang (Georgia Institute of Technology); Nanyun Peng (University of California Los Angeles); Zhou Yu (Columbia University); Mark Warschauer (University of California Irvine)
## Dataset Overview
### Where to find the Data and its Documentation
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
[Github](https://github.com/uci-soe/FairytaleQAData)
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
[ArXiv](https://arxiv.org/abs/2203.13947)
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
@inproceedings{xu2022fairytaleqa,
author={Xu, Ying and Wang, Dakuo and Yu, Mo and Ritchie, Daniel and Yao, Bingsheng and Wu, Tongshuang and Zhang, Zheng and Li, Toby Jia-Jun and Bradford, Nora and Sun, Branda and Hoang, Tran Bao and Sang, Yisi and Hou, Yufang and Ma, Xiaojuan and Yang, Diyi and Peng, Nanyun and Yu, Zhou and Warschauer, Mark},
title = {Fantastic Questions and Where to Find Them: Fairytale{QA} -- An Authentic Dataset for Narrative Comprehension},
publisher = {Association for Computational Linguistics},
year = {2022}
}
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Ying Xu, Dakuo Wang
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
ying.xu@uci.edu, dakuo.wang@ibm.com
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
yes
#### Leaderboard Link
<!-- info: Provide a link to the leaderboard. -->
<!-- scope: periscope -->
[PapersWithCode](https://paperswithcode.com/sota/question-generation-on-fairytaleqa)
#### Leaderboard Details
<!-- info: Briefly describe how the leaderboard evaluates models. -->
<!-- scope: microscope -->
The task is to generate questions corresponding to the given answers and the story context. Success on the Question Generation task is typically measured by a high ROUGE-L score against the reference ground-truth question.
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
no
#### Covered Dialects
<!-- info: What dialects are covered? Are there multiple dialects per language? -->
<!-- scope: periscope -->
[N/A]
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`English`
#### Whose Language?
<!-- info: Whose language is in the dataset? -->
<!-- scope: periscope -->
[N/A]
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
unknown: License information unavailable
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
The purpose of this dataset is to help develop systems that facilitate the assessment and training of narrative comprehension skills for children in the education domain. The dataset distinguishes fine-grained reading skills, such as the understanding of varying narrative elements, and contains high-quality QA-pairs generated by education experts with sufficient training and education-domain knowledge to create valid QA-pairs in a consistent way.
This dataset is suitable for developing models to automatically generate questions and QA-Pairs that satisfy the need for a continuous supply of new questions, which can potentially enable large-scale development of AI-supported interactive platforms for the learning and assessment of reading comprehension skills.
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Question Generation
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
The task was to generate questions corresponding to the given answers and the story context. Models trained for this task can potentially enable large-scale development of AI-supported interactive platforms for the learning and assessment of reading comprehension skills.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`academic`
#### Curation Organization(s)
<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
University of California Irvine
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Ying Xu (University of California Irvine); Dakuo Wang (IBM Research); Mo Yu (IBM Research); Daniel Ritchie (University of California Irvine); Bingsheng Yao (Rensselaer Polytechnic Institute); Tongshuang Wu (University of Washington); Zheng Zhang (University of Notre Dame); Toby Jia-Jun Li (University of Notre Dame); Nora Bradford (University of California Irvine); Branda Sun (University of California Irvine); Tran Bao Hoang (University of California Irvine); Yisi Sang (Syracuse University); Yufang Hou (IBM Research Ireland); Xiaojuan Ma (Hong Kong Univ. of Sci and Tech); Diyi Yang (Georgia Institute of Technology); Nanyun Peng (University of California Los Angeles); Zhou Yu (Columbia University); Mark Warschauer (University of California Irvine)
#### Funding
<!-- info: Who funded the data creation? -->
<!-- scope: microscope -->
Schmidt Futures
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Dakuo Wang (IBM Research); Bingsheng Yao (Rensselaer Polytechnic Institute); Ying Xu (University of California Irvine)
### Dataset Structure
#### Data Fields
<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
- `story_name`: a string of the story name to which the story section content belongs. Full story data can be found [here](https://github.com/uci-soe/FairytaleQAData).
- `content`: a string of the story section(s) content related to the experts' labeled QA-pair. Used as the input for both Question Generation and Question Answering tasks.
- `question`: a string of the question content. Used as the input for Question Answering task and as the output for Question Generation task.
- `answer`: a string of the answer content for all splits. Used as the input for Question Generation task and as the output for Question Answering task.
- `gem_id`: a string id following the GEM naming convention ```GEM-${DATASET_NAME}-${SPLIT-NAME}-${id}```, where `id` is an incrementing number starting at 1
- `target`: a string of the question content used for training
- `references`: a list of strings containing the question content, used for automatic evaluation
- `local_or_sum`: a string of either `local` or `summary`, indicating whether the QA relates to one story section or to multiple sections
- `attribute`: a string of one of `character`, `causal relationship`, `action`, `setting`, `feeling`, `prediction`, or `outcome resolution`; the classification of the QA by education expert annotators into 7 narrative elements from an established framework
- `ex_or_im`: a string of either `explicit` or `implicit`, indicating whether the answer can be directly found in the story content or must be inferred from it
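The `gem_id` convention above is mechanical enough to sketch in code. The helpers below are purely illustrative (the function names are not part of the dataset or the GEM tooling), and assume neither the dataset name nor the split name contains a hyphen:

```python
def make_gem_id(dataset_name: str, split_name: str, idx: int) -> str:
    """Build an id of the form GEM-${DATASET_NAME}-${SPLIT-NAME}-${id}."""
    return f"GEM-{dataset_name}-{split_name}-{idx}"


def parse_gem_id(gem_id: str):
    """Split a gem_id back into (dataset_name, split_name, idx)."""
    prefix, dataset_name, split_name, idx = gem_id.split("-", 3)
    assert prefix == "GEM", f"unexpected id prefix in {gem_id!r}"
    return dataset_name, split_name, int(idx)


print(make_gem_id("FairytaleQA", "test", 1006))   # GEM-FairytaleQA-test-1006
print(parse_gem_id("GEM-FairytaleQA-test-1006"))  # ('FairytaleQA', 'test', 1006)
```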
#### Reason for Structure
<!-- info: How was the dataset structure determined? -->
<!-- scope: microscope -->
[N/A]
#### How were labels chosen?
<!-- info: How were the labels chosen? -->
<!-- scope: microscope -->
A typical data point comprises a question, the corresponding story content, and one answer. Education expert annotators labeled whether the answer is locally relevant to one story section or requires summarization capabilities across multiple story sections, and whether the answers are explicit (can be directly found in the stories) or implicit (cannot be directly found in the story text). Additionally, education expert annotators categorized the QA-pairs via 7 narrative elements from an established framework.
#### Example Instance
<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
{'story_name': 'self-did-it',
'content': '" what is your name ? " asked the girl from underground . " self is my name , " said the woman . that seemed a curious name to the girl , and she once more began to pull the fire apart . then the woman grew angry and began to scold , and built it all up again . thus they went on for a good while ; but at last , while they were in the midst of their pulling apart and building up of the fire , the woman upset the tar - barrel on the girl from underground . then the latter screamed and ran away , crying : " father , father ! self burned me ! " " nonsense , if self did it , then self must suffer for it ! " came the answer from below the hill .',
'answer': 'the woman told the girl her name was self .',
'question': "why did the girl's father think the girl burned herself ?",
'gem_id': 'GEM-FairytaleQA-test-1006',
'target': "why did the girl's father think the girl burned herself ?",
'references': ["why did the girl's father think the girl burned herself ?"],
'local_or_sum': 'local',
'attribute': 'causal relationship',
'ex_or_im': 'implicit'}
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
The data is split into a train, validation, and test split randomly. The final split sizes are as follows:
| | Train | Validation | Test |
| ----- | ----- | ----- | ----- |
| # Books | 232 | 23 | 23 |
| # QA-Pairs | 8548 | 1025 |1007 |
#### Splitting Criteria
<!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
<!-- scope: microscope -->
The books are randomly split into train/validation/test splits, keeping the ratio of QA-pair counts across train:validation:test close to 8:1:1.
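A book-level random split along these lines can be sketched as follows. This is an illustrative reconstruction, not the published procedure: the seed, helper names, and rounding are assumptions, and since the official split balanced QA-pair counts rather than book counts, it will not reproduce the 232/23/23 book split above.

```python
import random


def split_books(book_names, seed=0, ratios=(0.8, 0.1, 0.1)):
    """Randomly assign whole books to train/validation/test so that
    QA-pairs from one story never leak across splits."""
    books = list(book_names)
    random.Random(seed).shuffle(books)
    n = len(books)
    n_train = round(n * ratios[0])
    n_val = round(n * ratios[1])
    return {
        "train": books[:n_train],
        "validation": books[n_train:n_train + n_val],
        "test": books[n_train + n_val:],
    }


splits = split_books([f"story_{i}" for i in range(278)])
print({k: len(v) for k, v in splits.items()})
# {'train': 222, 'validation': 28, 'test': 28}
```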
#### Outliers
<!-- info: What does an outlier of the dataset in terms of length/perplexity/embedding look like? -->
<!-- scope: microscope -->
[N/A]
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
The dataset distinguishes fine-grained reading skills, such as the understanding of varying narrative elements, and contains high-quality QA-pairs generated by education experts with sufficient training and education domain knowledge to create valid QA-pairs in a consistent way.
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
no
#### Ability that the Dataset measures
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
This dataset is suitable for developing models to automatically generate questions or QA-pairs that satisfy the need for a continuous supply of new questions, which can potentially enable large-scale development of AI-supported interactive platforms for the learning and assessment of reading comprehension skills.
### GEM-Specific Curation
#### Modified for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
yes
#### GEM Modifications
<!-- info: What changes have been made to the original dataset? -->
<!-- scope: periscope -->
`data points removed`
#### Modification Details
<!-- info: For each of these changes, describe them in more detail and provide the intended purpose of the modification -->
<!-- scope: microscope -->
The original data contains two answers by different annotators in the validation/test splits; we removed the second answer for the GEM version because it is not used for the Question Generation task.
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no
### Getting Started with the Task
#### Pointers to Resources
<!-- info: Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task. -->
<!-- scope: microscope -->
[N/A]
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
With the FairytaleQA dataset, the Question Generation task measures a model's ability to generate the various types of questions that correspond to different narrative elements.
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`ROUGE`
#### Proposed Evaluation
<!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
<!-- scope: microscope -->
The task is to generate questions corresponding to the given answers and the story context. Success on this task is typically measured by a high [ROUGE](https://huggingface.co/metrics/rouge) score against the reference ground-truth questions.
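ROUGE-L scores the longest common subsequence (LCS) of tokens between a generated question and the reference. The sketch below is a minimal, dependency-free sentence-level ROUGE-L F1 for illustration; real evaluations typically use a maintained implementation such as the ROUGE metric linked above, whose tokenization and stemming may produce different scores.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence between two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]


def rouge_l_f1(hypothesis: str, reference: str) -> float:
    """Sentence-level ROUGE-L F1 on whitespace-tokenized, lowercased text."""
    hyp, ref = hypothesis.lower().split(), reference.lower().split()
    lcs = lcs_length(hyp, ref)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(hyp), lcs / len(ref)
    return 2 * precision * recall / (precision + recall)


print(rouge_l_f1("why did the girl run away ?", "why did the girl run away ?"))  # 1.0
```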
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
yes
#### Relevant Previous Results
<!-- info: What are the most relevant previous results for this task/dataset? -->
<!-- scope: microscope -->
A [BART-based model](https://huggingface.co/facebook/bart-large) currently achieves a [ROUGE-L of 0.527/0.527](https://github.com/uci-soe/FairytaleQAData) on valid/test splits, which is reported as the baseline experiment for the dataset [paper](https://arxiv.org/pdf/2203.13947.pdf).
## Dataset Curation
### Original Curation
#### Original Curation Rationale
<!-- info: Original curation rationale -->
<!-- scope: telescope -->
FairytaleQA was built to focus on comprehension of narratives in the education domain, targeting students from kindergarten to eighth grade. We focus on narrative comprehension because (1) it is a high-level comprehension skill strongly predictive of reading achievement that plays a central role in daily life, as people frequently encounter narratives in different forms, and (2) narrative stories have a clear structure of specific elements and relations among these elements, and existing validated narrative comprehension frameworks built around this structure provide a basis for developing the annotation schema for our dataset.
#### Communicative Goal
<!-- info: What was the communicative goal? -->
<!-- scope: periscope -->
The purpose of this dataset is to help develop systems that facilitate the assessment and training of narrative comprehension skills for children in the education domain.
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
no
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Found`
#### Where was it found?
<!-- info: If found, where from? -->
<!-- scope: telescope -->
`Single website`
#### Language Producers
<!-- info: What further information do we have on the language producers? -->
<!-- scope: microscope -->
The fairytale story texts are from the [Project Gutenberg](https://www.gutenberg.org/) website
#### Topics Covered
<!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
<!-- scope: periscope -->
We gathered the text from the Project Gutenberg website, using “fairytale” as the search term.
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
validated by data curator
#### Data Preprocessing
<!-- info: How was the text data pre-processed? (Enter N/A if the text was not pre-processed) -->
<!-- scope: microscope -->
Given the large number of fairytales found, we used the most popular stories based on number of downloads, since these are presumably of higher quality. To ensure the readability of the text, we made a small number of minor revisions to some obviously outdated vocabulary (e.g., changing “ere” to “before”) and to unconventional uses of punctuation (e.g., changing consecutive semi-colons to periods).
These texts were broken down into small sections based on their semantic content by our annotators. The annotators were instructed to split the story into sections of 100-300 words that also contain meaningful content and are separated at natural story breaks. An initial annotator would split the story, and this would be reviewed by a cross-checking annotator. Most of the resulting sections were one natural paragraph of the original text.
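The 100-300-word sectioning described above was done by human annotators. A crude automatic approximation that only respects paragraph boundaries might look like the following; it is purely illustrative and cannot capture the annotators' judgement about natural story breaks:

```python
def sectionize(paragraphs, min_words=100, max_words=300):
    """Greedily group paragraphs into sections of roughly min_words-max_words,
    breaking only at paragraph boundaries. A paragraph longer than max_words
    is kept whole rather than split mid-paragraph."""
    sections, current, count = [], [], 0
    for para in paragraphs:
        n = len(para.split())
        # Flush the current section before it overflows, but only once it
        # has reached the minimum length.
        if current and count + n > max_words and count >= min_words:
            sections.append(" ".join(current))
            current, count = [], 0
        current.append(para)
        count += n
    if current:
        sections.append(" ".join(current))
    return sections
```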
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
manually
#### Filter Criteria
<!-- info: What were the selection criteria? -->
<!-- scope: microscope -->
For each story, we evaluated the reading difficulty level using the [textstat](https://pypi.org/project/textstat/) Python package, primarily based on sentence length, word length, and commonness of words. We excluded stories that are at 10th grade level or above.
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
expert created
#### Number of Raters
<!-- info: What is the number of raters -->
<!-- scope: telescope -->
2<n<10
#### Rater Qualifications
<!-- info: Describe the qualifications required of an annotator. -->
<!-- scope: periscope -->
All of these annotators have a B.A. degree in education, psychology, or cognitive science and have substantial experience in teaching and reading assessment. These annotators were supervised by three experts in literacy education.
#### Raters per Training Example
<!-- info: How many annotators saw each training example? -->
<!-- scope: periscope -->
2
#### Raters per Test Example
<!-- info: How many annotators saw each test example? -->
<!-- scope: periscope -->
3
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no
#### Annotation Values
<!-- info: Purpose and values for each annotation -->
<!-- scope: microscope -->
The dataset annotation distinguishes fine-grained reading skills, such as the understanding of varying narrative elements, and contains high-quality QA-pairs generated by education experts with sufficient training and education domain knowledge to create valid QA-pairs in a consistent way.
#### Any Quality Control?
<!-- info: Quality control measures? -->
<!-- scope: telescope -->
validated by data curators
#### Quality Control Details
<!-- info: Describe the quality control measures that were taken. -->
<!-- scope: microscope -->
The annotators were instructed to imagine that they were creating questions to test elementary or middle school students in the process of reading a complete story. We required the annotators to generate only natural, open-ended questions, avoiding “yes-” or “no-” questions. We also instructed them to provide a diverse set of questions about 7 different narrative elements, and with both implicit and explicit questions.
We asked the annotators to also generate answers for each of their questions. We asked them to provide the shortest possible answers but did not restrict them to complete sentences or short phrases. We also asked the annotators to label which section(s) the question and answer was from.
All annotators received a two-week training in which each of them was familiarized with the coding template and conducted practice coding on the same five stories. The practice QA pairs were then reviewed by the other annotators and the three experts, and discrepancies among annotators were discussed. During the annotation process, the team met once every week to review and discuss each member’s work. All QA pairs were cross-checked by two annotators, and 10% of the QA pairs were additionally checked by the expert supervisor.
For the 46 stories used as the evaluation set, we annotated a second reference answer by asking an annotator to independently read the story and answer the questions generated by others.
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
yes
#### Consent Policy Details
<!-- info: What was the consent policy? -->
<!-- scope: microscope -->
During the annotation process, the team met once every week to review and discuss each member’s work. All QA pairs were cross-checked by two annotators, and 10% of the QA pairs were additionally checked by the expert supervisor.
#### Other Consented Downstream Use
<!-- info: What other downstream uses of the data did the original data creators and the data curators consent to? -->
<!-- scope: microscope -->
Aside from the Question Generation task, the data creators and curators used this data for Question Answering and QA-pair generation tasks, and to identify social stereotypes represented in story narratives.
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
no PII
#### Justification for no PII
<!-- info: Provide a justification for selecting `no PII` above. -->
<!-- scope: periscope -->
The story content is from a publicly available website, and the annotated QA-pairs concern general knowledge of the story content, without references to the author or to any persons.
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
yes
#### Maintenance Plan Details
<!-- info: Describe the original dataset's maintenance plan. -->
<!-- scope: microscope -->
We plan to host various splits of the FairytaleQA dataset to better serve various types of research interests. We have the original data for two different split approaches: train/validation/test splits and splits by fairytale origin. We also plan to host the dataset on multiple platforms for various tasks.
#### Maintainer Contact Information
<!-- info: Provide contact information of a person responsible for the dataset maintenance -->
<!-- scope: periscope -->
Daniel Ritchie
#### Any Contestation Mechanism?
<!-- info: Does the maintenance plan include a contestation mechanism allowing individuals to request removal fo content? -->
<!-- scope: periscope -->
no mechanism
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset ore related tasks have been used in automated systems? -->
<!-- scope: telescope -->
yes - models trained on this dataset
#### Social Impact Observations
<!-- info: Did any of these previous uses result in observations about the social impact of the systems? In particular, has there been work outlining the risks and limitations of the system? Provide links and descriptions here. -->
<!-- scope: microscope -->
[N/A]
#### Changes as Consequence of Social Impact
<!-- info: Have any changes been made to the dataset as a result of these observations? -->
<!-- scope: periscope -->
[N/A]
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for exemple because their language, language variety, or social or geographical context is underepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
yes
#### Details on how Dataset Addresses the Needs
<!-- info: Describe how this dataset addresses the needs of underserved communities. -->
<!-- scope: microscope -->
From the educational perspective, given that reading comprehension is a multicomponent skill, it is ideal for comprehension questions to be able to identify students’ performance in specific sub-skills, thus allowing teachers to provide tailored guidance.
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
unsure
#### Are the Language Producers Representative of the Language?
<!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? -->
<!-- scope: periscope -->
[N/A]
## Considerations for Using the Data
### PII Risks and Liability
#### Potential PII Risk
<!-- info: Considering your answers to the PII part of the Data Curation Section, describe any potential privacy to the data subjects and creators risks when using the dataset. -->
<!-- scope: microscope -->
[N/A]
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`research use only`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`public domain`
### Known Technical Limitations
#### Technical Limitations
<!-- info: Describe any known technical limitations, such as spurrious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible. -->
<!-- scope: microscope -->
We noticed that human results are obtained via cross-estimation between the two annotated answers and are therefore underestimated. One possibility for future work is to conduct a large-scale human annotation to collect more answers per question and then leverage the massively annotated answers to better establish a human performance evaluation.
#### Unsuited Applications
<!-- info: When using a model trained on this dataset in a setting where users or the public may interact with its predictions, what are some pitfalls to look out for? In particular, describe some applications of the general task featured in this dataset that its curation or properties make it less suitable for. -->
<!-- scope: microscope -->
The QA-pairs annotated by education experts target an audience of children from kindergarten to eighth grade, so the difficulty of the QA-pairs is not comparable with that of other existing datasets sourced from knowledge graphs or knowledge bases such as Wikipedia.
#### Discouraged Use Cases
<!-- info: What are some discouraged use cases of a model trained to maximize the proposed metrics on this dataset? In particular, think about settings where decisions made by a model that performs reasonably well on the metric my still have strong negative consequences for user or members of the public. -->
<!-- scope: microscope -->
[N/A]
|
lighteval/MetadataTable3 | ---
dataset_info:
features:
- name: name
dtype: string
- name: hf_repo
dtype: string
- name: hf_subset
dtype: string
- name: hf_avail_splits
sequence: string
- name: evaluation_splits
sequence: string
- name: generation_size
dtype: int64
- name: metric
sequence: string
- name: suite
sequence: string
- name: prompt_function
dtype: string
splits:
- name: train
num_bytes: 106419
num_examples: 598
download_size: 22843
dataset_size: 106419
---
# Dataset Card for "MetadataTable3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/soda_nikke | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of soda/ソーダ/索达/소다 (Nikke: Goddess of Victory)
This is the dataset of soda/ソーダ/索达/소다 (Nikke: Goddess of Victory), containing 160 images and their tags.
The core tags of this character are `long_hair, breasts, maid_headdress, bangs, purple_eyes, green_hair, hair_bun, double_bun, large_breasts, huge_breasts, mole_on_breast, mole, very_long_hair, hair_ornament, pink_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 160 | 287.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/soda_nikke/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 160 | 136.91 MiB | [Download](https://huggingface.co/datasets/CyberHarem/soda_nikke/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 411 | 319.44 MiB | [Download](https://huggingface.co/datasets/CyberHarem/soda_nikke/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 160 | 243.09 MiB | [Download](https://huggingface.co/datasets/CyberHarem/soda_nikke/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 411 | 506.98 MiB | [Download](https://huggingface.co/datasets/CyberHarem/soda_nikke/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
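The `800` and `1200` packages cap the shorter image side at the given pixel count. A minimal sketch of that resizing rule (illustrative only, not the actual pipeline code):

```python
def capped_size(width, height, max_short_side):
    """Return a new (width, height) with the shorter side capped at
    max_short_side, preserving aspect ratio; images already within
    the cap are left unchanged."""
    short = min(width, height)
    if short <= max_short_side:
        return width, height
    scale = max_short_side / short
    return round(width * scale), round(height * scale)

# A 1600x1000 image in the "800" package would become 1280x800.
```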
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/soda_nikke',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
This is the result of tag clustering; some character outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 20 |  |  |  |  |  | looking_at_viewer, maid, 1girl, cleavage, solo, blush, dress, open_mouth, apron, frills, white_gloves, smile, simple_background, ascot, heart, detached_collar, holding, white_background, white_thighhighs |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | looking_at_viewer | maid | 1girl | cleavage | solo | blush | dress | open_mouth | apron | frills | white_gloves | smile | simple_background | ascot | heart | detached_collar | holding | white_background | white_thighhighs |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------|:-------|:--------|:-----------|:-------|:--------|:--------|:-------------|:--------|:---------|:---------------|:--------|:--------------------|:--------|:--------|:------------------|:----------|:-------------------|:-------------------|
| 0 | 20 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
DEplain/DEplain-web-doc | ---
annotations_creators:
- no-annotation
language:
- de
language_creators:
- expert-generated
license:
- other
multilinguality:
- translation
- monolingual
pretty_name: DEplain-web-doc
size_categories:
- <1K
source_datasets:
- original
tags:
- web-text
- plain language
- easy-to-read language
- document simplification
task_categories:
- text2text-generation
task_ids:
- text-simplification
---
# DEplain-web-doc: A corpus for German Document Simplification
DEplain-web-doc is a subcorpus of DEplain [(Stodden et al., 2023)](https://arxiv.org/abs/2305.18939) for document simplification.
The corpus consists of 396 (199/50/147) parallel documents crawled from the web in standard German and plain German (or easy-to-read German). All documents are either published under an open license or the copyright holders gave us permission to share the data.
If you are interested in a larger corpus, please check our paper and use the provided web crawler to download more parallel documents with a closed license.
Human annotators also aligned the 147 documents of the test set sentence by sentence to build a corpus for sentence simplification.
For the sentence-level version of this corpus, please see [https://huggingface.co/datasets/DEplain/DEplain-web-sent](https://huggingface.co/datasets/DEplain/DEplain-web-sent).
The documents of the training and development set were automatically aligned using MASSalign.
You can find this data here: [https://github.com/rstodden/DEPlain/](https://github.com/rstodden/DEPlain/tree/main/E__Sentence-level_Corpus/DEplain-web-sent/auto/open).
If you use the automatically aligned data, please use it cautiously, as the automatic alignments may contain errors.
# Dataset Card for DEplain-web-doc
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [DEplain-web GitHub repository](https://github.com/rstodden/DEPlain)
- **Paper:** ["DEplain: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification."](https://arxiv.org/abs/2305.18939)
- **Point of Contact:** [Regina Stodden](regina.stodden@hhu.de)
### Dataset Summary
[DEplain-web](https://github.com/rstodden/DEPlain) [(Stodden et al., 2023)](https://arxiv.org/abs/2305.18939) is a dataset for the evaluation of sentence and document simplification in German. All texts of this dataset are scraped from the web. All documents were licensed under an open license. The simple-complex sentence pairs are manually aligned.
The manually aligned portion of this dataset forms only the test set. For additional training and development data, please scrape more data from the web using a [web scraper for text simplification data](https://github.com/rstodden/data_collection_german_simplification) and align the sentences of the documents automatically using, for example, [MASSalign](https://github.com/ghpaetzold/massalign) by [Paetzold et al. (2017)](https://www.aclweb.org/anthology/I17-3001/).
### Supported Tasks and Leaderboards
The dataset supports the evaluation of `text-simplification` systems. Success in this task is typically measured using the [SARI](https://huggingface.co/metrics/sari) and [FKBLEU](https://huggingface.co/metrics/fkbleu) metrics described in the paper [Optimizing Statistical Machine Translation for Text Simplification](https://www.aclweb.org/anthology/Q16-1029.pdf).
### Languages
The texts in this dataset are written in German (de-de). The texts are in German plain language variants, e.g., plain language (Einfache Sprache) or easy-to-read language (Leichte Sprache).
### Domains
The texts are from 6 different domains: fictional texts (literature and fairy tales), bible texts, health-related texts, texts for language learners, texts for accessibility, and public administration texts.
## Dataset Structure
### Data Access
- The dataset is licensed under different open licenses, depending on the subcorpus.
### Data Instances
- `document-simplification` configuration: an instance consists of an original document and one reference simplification.
- `sentence-simplification` configuration: an instance consists of an original sentence and one manually aligned reference simplification. Please see [https://huggingface.co/datasets/DEplain/DEplain-web-sent](https://huggingface.co/datasets/DEplain/DEplain-web-sent).
- `sentence-wise alignment` configuration: an instance consists of original and simplified documents and manually aligned sentence pairs. In contrast to the sentence-simplification configuration, this configuration also contains sentence pairs in which the original and the simplified sentences are exactly the same. Please see [https://github.com/rstodden/DEPlain](https://github.com/rstodden/DEPlain/tree/main/C__Alignment_Algorithms)
### Data Fields
| data field | data field description |
|-------------------------------------------------|-------------------------------------------------------------------------------------------------------|
| `original` | an original text from the source dataset |
| `simplification` | a simplified text from the source dataset |
| `pair_id` | document pair id |
| `complex_document_id ` (on doc-level) | id of complex document (-1) |
| `simple_document_id ` (on doc-level) | id of simple document (-0) |
| `original_id ` (on sent-level) | id of sentence(s) of the original text |
| `simplification_id ` (on sent-level) | id of sentence(s) of the simplified text |
| `domain ` | text domain of the document pair |
| `corpus ` | subcorpus name |
| `simple_url ` | origin URL of the simplified document |
| `complex_url `                                   | origin URL of the original document                                                                   |
| `simple_level ` or `language_level_simple ` | required CEFR language level to understand the simplified document |
| `complex_level ` or `language_level_original ` | required CEFR language level to understand the original document |
| `simple_location_html ` | location on hard disk where the HTML file of the simple document is stored |
| `complex_location_html ` | location on hard disk where the HTML file of the original document is stored |
| `simple_location_txt ` | location on hard disk where the content extracted from the HTML file of the simple document is stored |
| `complex_location_txt `                          | location on hard disk where the content extracted from the HTML file of the original document is stored |
| `alignment_location ` | location on hard disk where the alignment is stored |
| `simple_author ` | author (or copyright owner) of the simplified document |
| `complex_author ` | author (or copyright owner) of the original document |
| `simple_title ` | title of the simplified document |
| `complex_title ` | title of the original document |
| `license ` | license of the data |
| `last_access ` or `access_date`                  | date of the data origin or date when the HTML files were downloaded                                   |
| `rater` | id of the rater who annotated the sentence pair |
| `alignment` | type of alignment, e.g., 1:1, 1:n, n:1 or n:m |
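The `alignment` field records how many original sentences map to how many simplified ones. A small sketch (function and argument names are illustrative, not the project's code) of how such a label can be derived from the aligned sentence ids:

```python
def alignment_type(original_ids, simplification_ids):
    """Classify an alignment as 1:1, 1:n, n:1 or n:m from the aligned sentence ids."""
    n_orig, n_simp = len(original_ids), len(simplification_ids)
    if n_orig == 1 and n_simp == 1:
        return "1:1"
    if n_orig == 1:
        return "1:n"  # split: one original sentence, several simplified sentences
    if n_simp == 1:
        return "n:1"  # fusion: several original sentences, one simplified sentence
    return "n:m"
```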
### Data Splits
DEplain-web contains a training set, a development set and a test set.
The dataset was split based on the license of the data. All manually aligned sentence pairs with an open license are part of the test set. The document-level test set likewise contains only the manually aligned documents. For the document-level train and dev sets, the documents which are not manually aligned or not publicly available are used. For the sentence level, alignment pairs can be produced by automatic alignment (see [Stodden et al., 2023](https://arxiv.org/abs/2305.18939)).
Document-level:
| | Train | Dev | Test | Total |
|-------------------------|-------|-----|------|-------|
| DEplain-web-manual-open | - | - | 147 | 147 |
| DEplain-web-auto-open   | 199   | 50  | -    | 249   |
| DEplain-web-auto-closed | 288 | 72 | - | 360 |
| in total | 487 | 122 | 147 | 756 |
Sentence-level:
| | Train | Dev | Test | Total |
|-------------------------|-------|-----|------|-------|
| DEplain-web-manual-open | - | - | 1846 | 1846 |
| DEplain-web-auto-open | 514 | 138 | - | 652 |
| DEplain-web-auto-closed | 767 | 175 | - | 942 |
| in total                | 1281  | 313 | 1846 | 3440  |
| **subcorpus** | **simple** | **complex** | **domain** | **description** | **# doc.** |
|----------------------------------|------------------|------------------|------------------|-------------------------------------------------------------------------------|------------------|
| **EinfacheBücher** | Plain German | Standard German / Old German | fiction | Books in plain German | 15 |
| **EinfacheBücherPassanten** | Plain German | Standard German / Old German | fiction | Books in plain German | 4 |
| **ApothekenUmschau** | Plain German | Standard German | health | Health magazine in which diseases are explained in plain German | 71 |
| **BZFE** | Plain German | Standard German | health | Information of the German Federal Agency for Food on good nutrition | 18 |
| **Alumniportal** | Plain German | Plain German | language learner | Texts related to Germany and German traditions written for language learners. | 137 |
| **Lebenshilfe** | Easy-to-read German | Standard German | accessibility | | 49 |
| **Bibel** | Easy-to-read German | Standard German | bible | Bible texts in easy-to-read German | 221 |
| **NDR-Märchen** | Easy-to-read German | Standard German / Old German | fiction | Fairytales in easy-to-read German | 10 |
| **EinfachTeilhaben** | Easy-to-read German | Standard German | accessibility | | 67 |
| **StadtHamburg** | Easy-to-read German | Standard German | public authority | Information of and regarding the German city Hamburg | 79 |
| **StadtKöln** | Easy-to-read German | Standard German | public authority | Information of and regarding the German city Cologne | 85 |
: Documents per Domain in DEplain-web.
| domain | avg. | std. | interpretation | # sents | # docs |
|------------------|---------------|---------------|-------------------------|-------------------|------------------|
| bible | 0.7011 | 0.31 | moderate | 6903 | 3 |
| fiction | 0.6131 | 0.39 | moderate | 23289 | 3 |
| health | 0.5147 | 0.28 | weak | 13736 | 6 |
| language learner | 0.9149 | 0.17 | almost perfect | 18493 | 65 |
| all | 0.8505 | 0.23 | strong | 87645 | 87 |
: Inter-Annotator-Agreement per Domain in DEplain-web-manual.
| operation | documents | percentage |
|-----------|-------------|------------|
| rephrase  | 863         | 11.73      |
| deletion | 3050 | 41.47 |
| addition | 1572 | 21.37 |
| identical | 887 | 12.06 |
| fusion | 110 | 1.5 |
| merge | 77 | 1.05 |
| split | 796 | 10.82 |
| in total | 7355 | 100 |
: Information regarding Simplification Operations in DEplain-web-manual.
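The percentage column above is each operation's share of the 7355 annotated instances; a quick arithmetic check (plain Python, not project code):

```python
# Operation counts as reported in the table above.
counts = {"rephrase": 863, "deletion": 3050, "addition": 1572,
          "identical": 887, "fusion": 110, "merge": 77, "split": 796}
total = sum(counts.values())  # 7355
# Percentage share of each operation, rounded to two decimals.
percentages = {op: round(100 * n / total, 2) for op, n in counts.items()}
# percentages["deletion"] is 41.47, matching the table.
```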
## Dataset Creation
### Curation Rationale
Current German text simplification datasets are limited in size or only automatically aligned.
We provide a manually aligned corpus to boost text simplification research in German.
### Source Data
#### Initial Data Collection and Normalization
The parallel documents were scraped from the web using a [web scraper for text simplification data](https://github.com/rstodden/data_collection_german_simplification).
The texts of the documents were manually simplified by professional translators.
The data was split into sentences using a German spaCy model.
Two German native speakers have manually aligned the sentence pairs by using the text simplification annotation tool [TS-ANNO](https://github.com/rstodden/TS_annotation_tool) by [Stodden & Kallmeyer (2022)](https://aclanthology.org/2022.acl-demo.14/).
#### Who are the source language producers?
The texts of the documents were manually simplified by professional translators. For an extensive list of the scraped URLs, see Table 10 in [Stodden et al. (2023)](https://arxiv.org/abs/2305.18939).
### Annotations
#### Annotation process
The instructions given to the annotators are available [here](https://github.com/rstodden/TS_annotation_tool/tree/master/annotation_schema).
#### Who are the annotators?
The annotators are two German native speakers who are trained in linguistics. Both were compensated with at least the minimum wage of their country of residence.
They are not part of any target group of text simplification.
### Personal and Sensitive Information
No sensitive data.
## Considerations for Using the Data
### Social Impact of Dataset
Many people cannot understand texts due to their complexity. With automatic text simplification methods, these texts can be simplified for them. Our new training data can benefit the training of TS models.
### Discussion of Biases
No bias is known.
### Other Known Limitations
The dataset is provided under different open licenses, depending on the license of each website from which the data was scraped. Please check the dataset license for additional information.
## Additional Information
### Dataset Curators
DEplain-web was developed by researchers at Heinrich Heine University Düsseldorf, Germany. This research is part of the PhD program "Online Participation", supported by the North Rhine-Westphalian (German) funding scheme "Forschungskolleg".
### Licensing Information
The corpus includes the following licenses: CC-BY-SA-3.0, CC-BY-4.0, and CC-BY-NC-ND-4.0. The corpus also includes a "save_use_share" license; for these documents, the data providers permitted us to share the data for research purposes.
### Citation Information
```
@inproceedings{stodden-etal-2023-deplain,
    title = "{DE}plain: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification",
author = "Stodden, Regina and
Momen, Omar and
Kallmeyer, Laura",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
    note = "preprint: https://arxiv.org/abs/2305.18939",
}
```
This dataset card uses material written by [Juan Diego Rodriguez](https://github.com/juand-r) and [Yacine Jernite](https://github.com/yjernite). |
bordman/mih-new | ---
license: mit
---
|
jxu9001/cs6301project180k | ---
dataset_info:
features:
- name: image
dtype: image
- name: expression
dtype: string
- name: img_width
dtype: int64
- name: img_height
dtype: int64
- name: x
dtype: float64
- name: y
dtype: float64
- name: w
dtype: float64
- name: h
dtype: float64
splits:
- name: train
num_bytes: 25184743853.082
num_examples: 143618
- name: test
num_bytes: 6213758550.984
num_examples: 35904
download_size: 8212978067
dataset_size: 31398502404.066
---
# Dataset Card for "cs6301project180k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ebrigham/NL_insurance_reviews_sentiment | ---
task_categories:
- text-classification
language:
- nl
size_categories:
- 1K<n<10K
---
The original dataset is in French (https://www.kaggle.com/datasets/fedi1996/insurance-reviews-france).
---
The dataset was translated into Dutch using the Google Translate Python library `googletrans==3.1.0a0`.
---
The sentiment labels are 1 (POS) and -1 (NEG).
--- |
AlekseyKorshuk/secret-dataset | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: reward
dtype: float64
splits:
- name: train
num_bytes: 8645384214
num_examples: 4470687
download_size: 5157411886
dataset_size: 8645384214
---
# Dataset Card for "secret-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Vinay573/aarkoocustomdataset | ---
dataset_info:
features:
- name: id
dtype: int64
- name: instructions
dtype: 'null'
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1004
dataset_size: 0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
el2e10/aya-paraphrase-hindi | ---
language:
- hi
license: cc
size_categories:
- n<1K
source_datasets:
- extended|ai4bharat/IndicXParaphrase
task_categories:
- text-generation
pretty_name: Aya Paraphrase Hindi
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: template_lang
dtype: string
- name: template_id
dtype: int64
splits:
- name: train
num_bytes: 644888
num_examples: 1001
download_size: 231804
dataset_size: 644888
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
### Description
This dataset is derived from an existing dataset made by AI4Bharat. We used the [IndicXParaphrase](https://huggingface.co/datasets/ai4bharat/IndicXParaphrase) dataset of AI4Bharat to create this instruction-style dataset.
We used the Hindi split of the above-mentioned dataset to create this one. It was created as part of the [Aya Open Science Initiative](https://sites.google.com/cohere.com/aya-en/home) from Cohere For AI.
IndicXParaphrase is a multilingual, n-way parallel dataset for paraphrase detection in 10 Indic languages. The original dataset (IndicXParaphrase) was made available under the CC-0 license.
### Template
The following Hindi templates were used to convert the original dataset:
```
#Template 1
prompt:
दुसरे शब्दों का प्रयोग करके इस वाक्य को लिखिए: "{original_sentence}"
completion:
{paraphrased_sentence}
```
```
#Template 2
prompt:
इस वाक्य को अन्य तरीके से फिर से लिखिए: "{original_sentence}"
completion:
{paraphrased_sentence}
```
```
#Template 3
prompt:
निम्नलिखित वाक्य का अर्थ बदले बिना उसे दोबारा लिखिए: "{original_sentence}"
completion:
{paraphrased_sentence}
```
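Putting the three templates together, the conversion from a paraphrase pair to an instruction-style row can be sketched as follows (function and field names mirror the schema above but are illustrative; template ids are zero-indexed here):

```python
# The three Hindi prompt templates listed above.
TEMPLATES = [
    'दुसरे शब्दों का प्रयोग करके इस वाक्य को लिखिए: "{original_sentence}"',
    'इस वाक्य को अन्य तरीके से फिर से लिखिए: "{original_sentence}"',
    'निम्नलिखित वाक्य का अर्थ बदले बिना उसे दोबारा लिखिए: "{original_sentence}"',
]

def to_instruction_row(original_sentence, paraphrased_sentence, template_id):
    """Build one prompt/completion example from a paraphrase pair."""
    return {
        "inputs": TEMPLATES[template_id].format(original_sentence=original_sentence),
        "targets": paraphrased_sentence,
        "template_lang": "hi",
        "template_id": template_id,
    }
```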
### Acknowledgement
Thanks to Ganesh Jagadeesan for helping with the preparation of this dataset by providing the Hindi translations of the above-mentioned English prompts. |
luotr123/lora | ---
license: apache-2.0
---
|
justinian336/elsalvadorgram | ---
dataset_info:
features:
- name: image_src
dtype: string
- name: title
dtype: string
- name: content
dtype: string
- name: link
dtype: string
splits:
- name: train
num_bytes: 2532132
num_examples: 1452
download_size: 1461097
dataset_size: 2532132
---
# Dataset Card for "elsalvadorgram"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
wecover/OPUS_GlobalVoices | ---
configs:
- config_name: default
data_files:
- split: train
path: '*/*/train.parquet'
- split: valid
path: '*/*/valid.parquet'
- split: test
path: '*/*/test.parquet'
- config_name: am
data_files:
- split: train
path: '*/*am*/train.parquet'
- split: test
path: '*/*am*/test.parquet'
- split: valid
path: '*/*am*/valid.parquet'
- config_name: ar
data_files:
- split: train
path: '*/*ar*/train.parquet'
- split: test
path: '*/*ar*/test.parquet'
- split: valid
path: '*/*ar*/valid.parquet'
- config_name: bn
data_files:
- split: train
path: '*/*bn*/train.parquet'
- split: test
path: '*/*bn*/test.parquet'
- split: valid
path: '*/*bn*/valid.parquet'
- config_name: ca
data_files:
- split: train
path: '*/*ca*/train.parquet'
- split: test
path: '*/*ca*/test.parquet'
- split: valid
path: '*/*ca*/valid.parquet'
- config_name: de
data_files:
- split: train
path: '*/*de*/train.parquet'
- split: test
path: '*/*de*/test.parquet'
- split: valid
path: '*/*de*/valid.parquet'
- config_name: el
data_files:
- split: train
path: '*/*el*/train.parquet'
- split: test
path: '*/*el*/test.parquet'
- split: valid
path: '*/*el*/valid.parquet'
- config_name: en
data_files:
- split: train
path: '*/*en*/train.parquet'
- split: test
path: '*/*en*/test.parquet'
- split: valid
path: '*/*en*/valid.parquet'
- config_name: es
data_files:
- split: train
path: '*/*es*/train.parquet'
- split: test
path: '*/*es*/test.parquet'
- split: valid
path: '*/*es*/valid.parquet'
- config_name: fa
data_files:
- split: train
path: '*/*fa*/train.parquet'
- split: test
path: '*/*fa*/test.parquet'
- split: valid
path: '*/*fa*/valid.parquet'
- config_name: fr
data_files:
- split: train
path: '*/*fr*/train.parquet'
- split: test
path: '*/*fr*/test.parquet'
- split: valid
path: '*/*fr*/valid.parquet'
- config_name: hi
data_files:
- split: train
path: '*/*hi*/train.parquet'
- split: test
path: '*/*hi*/test.parquet'
- split: valid
path: '*/*hi*/valid.parquet'
- config_name: hu
data_files:
- split: train
path: '*/*hu*/train.parquet'
- split: test
path: '*/*hu*/test.parquet'
- split: valid
path: '*/*hu*/valid.parquet'
- config_name: id
data_files:
- split: train
path: '*/*id*/train.parquet'
- split: test
path: '*/*id*/test.parquet'
- split: valid
path: '*/*id*/valid.parquet'
- config_name: it
data_files:
- split: train
path: '*/*it*/train.parquet'
- split: test
path: '*/*it*/test.parquet'
- split: valid
path: '*/*it*/valid.parquet'
- config_name: mg
data_files:
- split: train
path: '*/*mg*/train.parquet'
- split: test
path: '*/*mg*/test.parquet'
- split: valid
path: '*/*mg*/valid.parquet'
- config_name: mk
data_files:
- split: train
path: '*/*mk*/train.parquet'
- split: test
path: '*/*mk*/test.parquet'
- split: valid
path: '*/*mk*/valid.parquet'
- config_name: my
data_files:
- split: train
path: '*/*my*/train.parquet'
- split: test
path: '*/*my*/test.parquet'
- split: valid
path: '*/*my*/valid.parquet'
- config_name: nl
data_files:
- split: train
path: '*/*nl*/train.parquet'
- split: test
path: '*/*nl*/test.parquet'
- split: valid
path: '*/*nl*/valid.parquet'
- config_name: pl
data_files:
- split: train
path: '*/*pl*/train.parquet'
- split: test
path: '*/*pl*/test.parquet'
- split: valid
path: '*/*pl*/valid.parquet'
- config_name: pt
data_files:
- split: train
path: '*/*pt*/train.parquet'
- split: test
path: '*/*pt*/test.parquet'
- split: valid
path: '*/*pt*/valid.parquet'
- config_name: ru
data_files:
- split: train
path: '*/*ru*/train.parquet'
- split: test
path: '*/*ru*/test.parquet'
- split: valid
path: '*/*ru*/valid.parquet'
- config_name: sr
data_files:
- split: train
path: '*/*sr*/train.parquet'
- split: test
path: '*/*sr*/test.parquet'
- split: valid
path: '*/*sr*/valid.parquet'
- config_name: sw
data_files:
- split: train
path: '*/*sw*/train.parquet'
- split: test
path: '*/*sw*/test.parquet'
- split: valid
path: '*/*sw*/valid.parquet'
- config_name: tr
data_files:
- split: train
path: '*/*tr*/train.parquet'
- split: test
path: '*/*tr*/test.parquet'
- split: valid
path: '*/*tr*/valid.parquet'
- config_name: ur
data_files:
- split: train
path: '*/*ur*/train.parquet'
- split: test
path: '*/*ur*/test.parquet'
- split: valid
path: '*/*ur*/valid.parquet'
- config_name: zhs
data_files:
- split: train
path: '*/*zhs*/train.parquet'
- split: test
path: '*/*zhs*/test.parquet'
- split: valid
path: '*/*zhs*/valid.parquet'
- config_name: zht
data_files:
- split: train
path: '*/*zht*/train.parquet'
- split: test
path: '*/*zht*/test.parquet'
- split: valid
path: '*/*zht*/valid.parquet'
- config_name: bg
data_files:
- split: train
path: '*/*bg*/train.parquet'
- split: test
path: '*/*bg*/test.parquet'
- split: valid
path: '*/*bg*/valid.parquet'
- config_name: cs
data_files:
- split: train
path: '*/*cs*/train.parquet'
- split: test
path: '*/*cs*/test.parquet'
- split: valid
path: '*/*cs*/valid.parquet'
- config_name: da
data_files:
- split: train
path: '*/*da*/train.parquet'
- split: test
path: '*/*da*/test.parquet'
- split: valid
path: '*/*da*/valid.parquet'
- config_name: eo
data_files:
- split: train
path: '*/*eo*/train.parquet'
- split: test
path: '*/*eo*/test.parquet'
- split: valid
path: '*/*eo*/valid.parquet'
- config_name: he
data_files:
- split: train
path: '*/*he*/train.parquet'
- split: test
path: '*/*he*/test.parquet'
- split: valid
path: '*/*he*/valid.parquet'
- config_name: km
data_files:
- split: train
path: '*/*km*/train.parquet'
- split: test
path: '*/*km*/test.parquet'
- split: valid
path: '*/*km*/valid.parquet'
- config_name: ko
data_files:
- split: train
path: '*/*ko*/train.parquet'
- split: test
path: '*/*ko*/test.parquet'
- split: valid
path: '*/*ko*/valid.parquet'
- config_name: ku
data_files:
- split: train
path: '*/*ku*/train.parquet'
- split: test
path: '*/*ku*/test.parquet'
- split: valid
path: '*/*ku*/valid.parquet'
- config_name: ne
data_files:
- split: train
path: '*/*ne*/train.parquet'
- split: test
path: '*/*ne*/test.parquet'
- split: valid
path: '*/*ne*/valid.parquet'
- config_name: or
data_files:
- split: train
path: '*/*or*/train.parquet'
- split: test
path: '*/*or*/test.parquet'
- split: valid
path: '*/*or*/valid.parquet'
- config_name: pa
data_files:
- split: train
path: '*/*pa*/train.parquet'
- split: test
path: '*/*pa*/test.parquet'
- split: valid
path: '*/*pa*/valid.parquet'
- config_name: ro
data_files:
- split: train
path: '*/*ro*/train.parquet'
- split: test
path: '*/*ro*/test.parquet'
- split: valid
path: '*/*ro*/valid.parquet'
- config_name: sq
data_files:
- split: train
path: '*/*sq*/train.parquet'
- split: test
path: '*/*sq*/test.parquet'
- split: valid
path: '*/*sq*/valid.parquet'
- config_name: sv
data_files:
- split: train
path: '*/*sv*/train.parquet'
- split: test
path: '*/*sv*/test.parquet'
- split: valid
path: '*/*sv*/valid.parquet'
---
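Each per-language config above selects parquet files with a glob pattern such as `*/*fr*/train.parquet`. A rough sketch of the matching logic using Python's `fnmatch` (the repository paths are made up for illustration, and the Hub's glob semantics differ slightly, e.g. `*` does not normally cross `/` there):

```python
from fnmatch import fnmatch

# Hypothetical repository paths: one parquet file per language pair and split.
paths = [
    "v2018q4/en-fr/train.parquet",
    "v2018q4/de-en/train.parquet",
    "v2018q4/es-fr/valid.parquet",
]

# Files selected by the `fr` config's train-split pattern.
fr_train = [p for p in paths if fnmatch(p, "*/*fr*/train.parquet")]
```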
|
tyzhu/squad_qa_wrong_title_v5_full | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer
dtype: string
- name: context_id
dtype: string
- name: correct_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 7596960
num_examples: 5070
- name: validation
num_bytes: 361864
num_examples: 300
download_size: 1530108
dataset_size: 7958824
---
# Dataset Card for "squad_qa_wrong_title_v5_full"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/CodeAlpaca-20k_standardized | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
splits:
- name: train
num_bytes: 7774454
num_examples: 60066
download_size: 0
dataset_size: 7774454
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "CodeAlpaca-20k_standardized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_EleutherAI__pythia-1.3b | ---
pretty_name: Evaluation run of EleutherAI/pythia-1.3b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [EleutherAI/pythia-1.3b](https://huggingface.co/EleutherAI/pythia-1.3b) on the\
\ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_EleutherAI__pythia-1.3b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-21T20:31:22.068379](https://huggingface.co/datasets/open-llm-leaderboard/details_EleutherAI__pythia-1.3b/blob/main/results_2023-10-21T20-31-22.068379.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0010486577181208054,\n\
\ \"em_stderr\": 0.0003314581465219287,\n \"f1\": 0.040563129194630954,\n\
\ \"f1_stderr\": 0.0011177096979539825,\n \"acc\": 0.29182616042743625,\n\
\ \"acc_stderr\": 0.008309831271227\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0010486577181208054,\n \"em_stderr\": 0.0003314581465219287,\n\
\ \"f1\": 0.040563129194630954,\n \"f1_stderr\": 0.0011177096979539825\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.009855951478392721,\n \
\ \"acc_stderr\": 0.00272107657704166\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.5737963693764798,\n \"acc_stderr\": 0.013898585965412338\n\
\ }\n}\n```"
repo_url: https://huggingface.co/EleutherAI/pythia-1.3b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|arc:challenge|25_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_21T20_31_22.068379
path:
- '**/details_harness|drop|3_2023-10-21T20-31-22.068379.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-21T20-31-22.068379.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_21T20_31_22.068379
path:
- '**/details_harness|gsm8k|5_2023-10-21T20-31-22.068379.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-21T20-31-22.068379.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hellaswag|10_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T15:01:09.572948.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T15:01:09.572948.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T15:01:09.572948.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_21T20_31_22.068379
path:
- '**/details_harness|winogrande|5_2023-10-21T20-31-22.068379.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-21T20-31-22.068379.parquet'
- config_name: results
data_files:
- split: 2023_07_19T15_01_09.572948
path:
- results_2023-07-19T15:01:09.572948.parquet
- split: 2023_10_21T20_31_22.068379
path:
- results_2023-10-21T20-31-22.068379.parquet
- split: latest
path:
- results_2023-10-21T20-31-22.068379.parquet
---
# Dataset Card for Evaluation run of EleutherAI/pythia-1.3b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/EleutherAI/pythia-1.3b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [EleutherAI/pythia-1.3b](https://huggingface.co/EleutherAI/pythia-1.3b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_EleutherAI__pythia-1.3b",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-21T20:31:22.068379](https://huggingface.co/datasets/open-llm-leaderboard/details_EleutherAI__pythia-1.3b/blob/main/results_2023-10-21T20-31-22.068379.json) (note that there might be results for other tasks in the repository if successive evals didn't cover the same tasks; you can find each task in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"em": 0.0010486577181208054,
"em_stderr": 0.0003314581465219287,
"f1": 0.040563129194630954,
"f1_stderr": 0.0011177096979539825,
"acc": 0.29182616042743625,
"acc_stderr": 0.008309831271227
},
"harness|drop|3": {
"em": 0.0010486577181208054,
"em_stderr": 0.0003314581465219287,
"f1": 0.040563129194630954,
"f1_stderr": 0.0011177096979539825
},
"harness|gsm8k|5": {
"acc": 0.009855951478392721,
"acc_stderr": 0.00272107657704166
},
"harness|winogrande|5": {
"acc": 0.5737963693764798,
"acc_stderr": 0.013898585965412338
}
}
```
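As a sanity check, the aggregated `acc` under `"all"` is simply the mean of the per-task accuracies reported above (GSM8K and WinoGrande); a minimal sketch using those values:

```python
# Recompute the aggregated accuracy from the per-task results above.
results = {
    "harness|gsm8k|5": {"acc": 0.009855951478392721},
    "harness|winogrande|5": {"acc": 0.5737963693764798},
}

accs = [task["acc"] for task in results.values()]
mean_acc = sum(accs) / len(accs)
print(mean_acc)  # ~0.29182616042743625, matching the "all" entry above
```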
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
Multimodal-Fatima/FGVC_Aircraft_test_facebook_opt_350m_Attributes_Caption_ns_3333 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: image
dtype: image
- name: prompt
dtype: string
- name: true_label
dtype: string
- name: prediction
dtype: string
- name: scores
sequence: float64
splits:
- name: fewshot_0_bs_16
num_bytes: 299298854.375
num_examples: 3333
- name: fewshot_1_bs_16
num_bytes: 300147792.375
num_examples: 3333
- name: fewshot_3_bs_16
num_bytes: 301863124.375
num_examples: 3333
download_size: 891924279
dataset_size: 901309771.125
---
# Dataset Card for "FGVC_Aircraft_test_facebook_opt_350m_Attributes_Caption_ns_3333"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/munakata_atsumi_idolmastercinderellagirls | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of munakata_atsumi/棟方愛海 (THE iDOLM@STER: Cinderella Girls)
This is the dataset of munakata_atsumi/棟方愛海 (THE iDOLM@STER: Cinderella Girls), containing 132 images and their tags.
The core tags of this character are `brown_hair, hair_bun, double_bun, purple_eyes, short_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 132 | 112.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/munakata_atsumi_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 132 | 79.94 MiB | [Download](https://huggingface.co/datasets/CyberHarem/munakata_atsumi_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 258 | 152.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/munakata_atsumi_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 132 | 107.28 MiB | [Download](https://huggingface.co/datasets/CyberHarem/munakata_atsumi_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 258 | 199.58 MiB | [Download](https://huggingface.co/datasets/CyberHarem/munakata_atsumi_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/munakata_atsumi_idolmastercinderellagirls',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, blush, open_mouth, smile, solo, drooling, +_+, long_hair |
| 1 | 10 |  |  |  |  |  | 1girl, solo, dress, open_mouth, angel_wings, blush, smile, drooling, hairband, halo, choker, heart-shaped_pupils, white_gloves, looking_at_viewer |
| 2 | 6 |  |  |  |  |  | 1girl, looking_at_viewer, short_sleeves, smile, solo, blush, bracelet, hair_bow, heart, open_mouth, skirt, striped_thighhighs |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | open_mouth | smile | solo | drooling | +_+ | long_hair | dress | angel_wings | hairband | halo | choker | heart-shaped_pupils | white_gloves | looking_at_viewer | short_sleeves | bracelet | hair_bow | heart | skirt | striped_thighhighs |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------------|:--------|:-------|:-----------|:------|:------------|:--------|:--------------|:-----------|:-------|:---------|:----------------------|:---------------|:--------------------|:----------------|:-----------|:-----------|:--------|:--------|:---------------------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 1 | 10 |  |  |  |  |  | X | X | X | X | X | X | | | X | X | X | X | X | X | X | X | | | | | | |
| 2 | 6 |  |  |  |  |  | X | X | X | X | X | | | | | | | | | | | X | X | X | X | X | X | X |
|
Myfyr/sv_corpora_parliament_processed | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 292351437
num_examples: 1892723
download_size: 0
dataset_size: 292351437
---
# Dataset Card for "sv_corpora_parliament_processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
almontalvao/home_products_ads | ---
dataset_info:
features:
- name: name
dtype: string
- name: description
dtype: string
- name: ad
dtype: string
splits:
- name: train
num_bytes: 4987
num_examples: 10
download_size: 8185
dataset_size: 4987
---
# Synthetic Dataset for Ad Generation using AI
Data generated using the following steps:
* Prompt GPT-3.5-turbo to create a list of 10 home products and their descriptions.
* Form into the desired format `{"product" : "", "description" : ""}`.
* Then prompt GPT to create ads for each of the items.
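The two-step generation above can be sketched as follows; the prompt wording here is a hypothetical reconstruction for illustration, not the exact prompt used:

```python
# Sketch of the generation pipeline: products first, then one ad per product.
# The prompt wording below is illustrative; the actual GPT prompts were not published.
products = [
    {"product": "Ceramic vase", "description": "A hand-glazed vase for fresh flowers."},
]

def ad_prompt(item: dict) -> str:
    return (f"Write a short advertisement for the following home product.\n"
            f"Product: {item['product']}\n"
            f"Description: {item['description']}")

prompt = ad_prompt(products[0])
print(prompt.splitlines()[1])  # Product: Ceramic vase
```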
Note: This data was not cleaned or verified manually. |
tomrb/minipileoflaw | ---
configs:
- config_name: acus_reports
data_files:
- split: train
path: "data/minipileoflaw_acus_reports_train.csv"
- split: valid
path: "data/minipileoflaw_acus_reports_valid.csv"
--- |
Taranosaurus/bash-org-archive.com | ---
license: unknown
task_categories:
- text-generation
language:
- en
tags:
- bash.org
- irc
- chat-archive
pretty_name: Bash.org Archive
size_categories:
- 10K<n<100K
dataset_info:
features:
- name: qid
dtype: string
- name: score
dtype: string
- name: quote
dtype: string
splits:
- name: train
num_bytes: 3548502
num_examples: 21092
download_size: 3548502
  dataset_size: 3548502
---
**Details**
This is an unofficial dataset of an archive mirror of Bash.org: https://bash-org-archive.com/
Bash.org was a website launched in 1999 dedicated to archiving funny quotes from IRC and other chat platforms over the years.
It offers a look into jokes, memes, and often inappropriate content that was quite commonplace at the time.
This dataset has been cleaned with a custom parser, aiming to preserve the original format of the content.
The parquet file contains the following columns:
| qid | score | quote |
| --- | ----- | ----- |
| Quote ID | Score | Quote |
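Since scores are stored as strings, they need casting before numeric filtering; a minimal sketch (the rows below are made-up placeholders, only the qid/score/quote schema matches the real dataset):

```python
# Filter quotes by score; the sample rows here are hypothetical placeholders
# that follow the qid/score/quote column layout shown above.
rows = [
    {"qid": "#54588", "score": "517", "quote": "<mike> i wanna be a pilot"},
    {"qid": "#2", "score": "12", "quote": "<b> hi"},
]

# Scores are string-typed in the dataset, so cast before comparing.
popular = [r for r in rows if int(r["score"]) >= 100]
print([r["qid"] for r in popular])  # ['#54588']
```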
**Sample**
Quote ID: #54588
Score: 517
Quote:
```
<BalkanEmperorGlaug> Another flight attendant's comment on a
less than perfect landing: "We ask you to please remain
seated as Captain Kangaroo bounces us to the terminal."
<Schroe[Sleepies]> On my first flight ever, the captain came
on over the intercom -
<Schroe[Sleepies]> "We aren't really a flight crew ... but we
did stay at a Holiday Inn Express!"
<mike> :P
<mike> i wanna be a pilot
<mike> then i can come over the intercom, "What's this button
do...OH FUCKING SHIT!!"
<mike> then after panic ensues, "Just kidding!"
<Glaug-Eldare> "The weather at our destination is 50 degrees
with some broken clouds, but we'll try to have them fixed
before we arrive."
<Glaug-Eldare> Then you can get fired!
<Glaug-Eldare> =D
<mike> then i can get a job that's on the ground!
<Glaug-Eldare> And throw wrenches at planes' windows while
they're taking off?
<mike> i was thinking more along the lines of programming
``` |
HyperionHF/Anthropic-evals-persona | ---
license: cc-by-4.0
---
|
Babelscape/wikineural | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- de
- en
- es
- fr
- it
- nl
- pl
- pt
- ru
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: wikineural-dataset
tags:
- structure-prediction
---
## Table of Contents
- [Description](#description)
- [Dataset Structure](#dataset-structure)
- [Additional Information](#additional-information)
## Dataset Card for WikiNEuRal dataset
## Dataset Description
- **Summary:** Training data for NER in 9 languages.
- **Repository:** [https://github.com/Babelscape/wikineural](https://github.com/Babelscape/wikineural)
- **Paper:** [https://aclanthology.org/wikineural](https://aclanthology.org/2021.findings-emnlp.215/)
- **Point of Contact:** [tedeschi@babelscape.com](tedeschi@babelscape.com)
## Description
- **Summary:** In a nutshell, WikiNEuRal is a novel technique that builds upon a multilingual lexical knowledge base (i.e., [BabelNet](https://babelnet.org/)) and transformer-based architectures (i.e., [BERT](https://arxiv.org/abs/1810.04805)) to produce high-quality annotations for multilingual NER. It shows consistent improvements of up to 6 span-based F1-score points over state-of-the-art alternative data production methods on common benchmarks for NER. We used this methodology to automatically generate training data for NER in 9 languages.
- **Repository:** [https://github.com/Babelscape/wikineural](https://github.com/Babelscape/wikineural)
- **Paper:** [https://aclanthology.org/wikineural](https://aclanthology.org/2021.findings-emnlp.215/)
- **Point of Contact:** [tedeschi@babelscape.com](tedeschi@babelscape.com)
## Dataset Structure
The data fields are the same among all splits.
- `tokens`: a `list` of `string` features.
- `ner_tags`: a `list` of classification labels (`int`). Full tagset with indices:
```python
{'O': 0, 'B-PER': 1, 'I-PER': 2, 'B-ORG': 3, 'I-ORG': 4, 'B-LOC': 5, 'I-LOC': 6, 'B-MISC': 7, 'I-MISC': 8}
```
- `lang`: a `string` feature. Full list of languages: Dutch (nl), English (en), French (fr), German (de), Italian (it), Polish (pl), Portuguese (pt), Russian (ru), Spanish (es).
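A minimal sketch of decoding integer `ner_tags` back to string labels using the tagset above (the example sentence is hypothetical):

```python
# Invert the tagset above to map integer ner_tags back to label strings.
label2id = {'O': 0, 'B-PER': 1, 'I-PER': 2, 'B-ORG': 3, 'I-ORG': 4,
            'B-LOC': 5, 'I-LOC': 6, 'B-MISC': 7, 'I-MISC': 8}
id2label = {i: label for label, i in label2id.items()}

# A hypothetical example sentence with its tag ids.
tokens = ["Simone", "Tedeschi", "works", "at", "Babelscape", "."]
ner_tags = [1, 2, 0, 0, 3, 0]

decoded = [id2label[t] for t in ner_tags]
print(list(zip(tokens, decoded)))
# [('Simone', 'B-PER'), ('Tedeschi', 'I-PER'), ('works', 'O'),
#  ('at', 'O'), ('Babelscape', 'B-ORG'), ('.', 'O')]
```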
## Dataset Statistics
The table below shows the number of sentences, number of tokens and number of instances per class, for each of the 9 languages.
| Dataset Version | Sentences | Tokens | PER | ORG | LOC | MISC | OTHER |
| :------------- | -------------: | -------------: | -------------: | -------------: | -------------: | -------------: | -------------: |
| WikiNEuRal EN | 116k | 2.73M | 51k | 31k | 67k | 45k | 2.40M |
| WikiNEuRal ES | 95k | 2.33M | 43k | 17k | 68k | 25k | 2.04M |
| WikiNEuRal NL | 107k | 1.91M | 46k | 22k | 61k | 24k | 1.64M |
| WikiNEuRal DE | 124k | 2.19M | 60k | 32k | 59k | 25k | 1.87M |
| WikiNEuRal RU | 123k | 2.39M | 40k | 26k | 89k | 25k | 2.13M |
| WikiNEuRal IT | 111k | 2.99M | 67k | 22k | 97k | 26k | 2.62M |
| WikiNEuRal FR | 127k | 3.24M | 76k | 25k | 101k | 29k | 2.83M |
| WikiNEuRal PL | 141k | 2.29M | 59k | 34k | 118k | 22k | 1.91M |
| WikiNEuRal PT | 106k | 2.53M | 44k | 17k | 112k | 25k | 2.20M |
## Additional Information
- **Licensing Information**: Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
- **Citation Information**: Please consider citing our work if you use data and/or code from this repository.
```bibtex
@inproceedings{tedeschi-etal-2021-wikineural-combined,
title = "{W}iki{NE}u{R}al: {C}ombined Neural and Knowledge-based Silver Data Creation for Multilingual {NER}",
author = "Tedeschi, Simone and
Maiorca, Valentino and
Campolungo, Niccol{\`o} and
Cecconi, Francesco and
Navigli, Roberto",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.215",
pages = "2521--2533",
abstract = "Multilingual Named Entity Recognition (NER) is a key intermediate task which is needed in many areas of NLP. In this paper, we address the well-known issue of data scarcity in NER, especially relevant when moving to a multilingual scenario, and go beyond current approaches to the creation of multilingual silver data for the task. We exploit the texts of Wikipedia and introduce a new methodology based on the effective combination of knowledge-based approaches and neural models, together with a novel domain adaptation technique, to produce high-quality training corpora for NER. We evaluate our datasets extensively on standard benchmarks for NER, yielding substantial improvements up to 6 span-based F1-score points over previous state-of-the-art systems for data creation.",
}
```
- **Contributions**: Thanks to [@sted97](https://github.com/sted97) for adding this dataset.
|
lmqg/qg_ruquad | ---
license: cc-by-4.0
pretty_name: SberQuAD for question generation
language: ru
multilinguality: monolingual
size_categories: 10K<n<100K
source_datasets: sberquad
task_categories:
- text-generation
task_ids:
- language-modeling
tags:
- question-generation
---
# Dataset Card for "lmqg/qg_ruquad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a subset of [QG-Bench](https://github.com/asahi417/lm-question-generation/blob/master/QG_BENCH.md#datasets), a unified question generation benchmark proposed in
["Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation, EMNLP 2022 main conference"](https://arxiv.org/abs/2210.03992).
This is a modified version of [SberQuAD](https://huggingface.co/datasets/sberquad) for the question generation (QG) task.
Since the original dataset contains only training and validation sets, we manually sampled a test set from the
training set; the test set has no paragraph overlap with the remaining training data.
### Supported Tasks and Leaderboards
* `question-generation`: The dataset is assumed to be used to train a model for question generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more details).
### Languages
Russian (ru)
## Dataset Structure
An example of 'train' looks as follows.
```
{
'answer': 'известковыми выделениями сине-зелёных водорослей',
'question': 'чем представлены органические остатки?',
'sentence': 'Они представлены известковыми выделениями сине-зелёных водорослей , ходами червей, остатками кишечнополостных.'
'paragraph': "В протерозойских отложениях органические остатки встречаются намного чаще, чем в архейских. Они представлены..."
'sentence_answer': "Они представлены <hl> известковыми выделениями сине-зелёных водорослей <hl> , ход...",
'paragraph_answer': "В протерозойских отложениях органические остатки встречаются намного чаще, чем в архейских. Они представлены <hl> известковыми выделениям...",
'paragraph_sentence': "В протерозойских отложениях органические остатки встречаются намного чаще, чем в архейских. <hl> Они представлены известковыми выделениями сине-зелёных водорослей , ходами червей, остатками кишечнополостных. <hl> Кроме..."
}
```
The data fields are the same among all splits.
- `question`: a `string` feature.
- `paragraph`: a `string` feature.
- `answer`: a `string` feature.
- `sentence`: a `string` feature.
- `paragraph_answer`: a `string` feature, which is same as the paragraph but the answer is highlighted by a special token `<hl>`.
- `paragraph_sentence`: a `string` feature, which is same as the paragraph but a sentence containing the answer is highlighted by a special token `<hl>`.
- `sentence_answer`: a `string` feature, which is same as the sentence but the answer is highlighted by a special token `<hl>`.
Each of `paragraph_answer`, `paragraph_sentence`, and `sentence_answer` feature is assumed to be used to train a question generation model,
but with different information. The `paragraph_answer` and `sentence_answer` features are for answer-aware question generation and
`paragraph_sentence` feature is for sentence-aware question generation.
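The highlighted fields can be reconstructed from the plain fields; a minimal sketch, assuming the answer occurs exactly once in the paragraph (the example text is a hypothetical English stand-in):

```python
HL = "<hl>"

def highlight(text: str, span: str) -> str:
    """Wrap the first occurrence of `span` in `text` with <hl> tokens,
    mirroring the paragraph_answer / sentence_answer / paragraph_sentence fields."""
    start = text.index(span)
    end = start + len(span)
    return f"{text[:start]}{HL} {span} {HL}{text[end:]}"

paragraph = "The cat sat on the mat."
answer = "the mat"
print(highlight(paragraph, answer))
# The cat sat on <hl> the mat <hl>.
```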
## Data Splits
| train | validation | test |
|------:|-----------:|-----:|
| 45327 | 5036 | 23936 |
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` |
ahmedabdelwahed/Medical_papers_title_and_abstract_NLP_dataset | ---
license: mit
---
Originally published on [Kaggle](https://www.kaggle.com/datasets/wolfmedal/medical-paper-title-and-abstract-dataset/data).
|
W1lson/test | ---
dataset_info:
features:
- name: Question
dtype: string
- name: Answer
dtype: string
- name: Context
dtype: string
splits:
- name: train
num_bytes: 556
num_examples: 4
download_size: 2763
dataset_size: 556
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
pretty_name: SQuAD
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: squad
train-eval-index:
- config: plain_text
task: question-answering
task_id: extractive_question_answering
splits:
train_split: train
eval_split: validation
col_mapping:
question: question
context: context
answers:
text: text
answer_start: answer_start
metrics:
- type: squad
name: SQuAD
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
config_name: plain_text
splits:
- name: train
num_bytes: 79317110
num_examples: 87599
- name: validation
num_bytes: 10472653
num_examples: 10570
download_size: 35142551
dataset_size: 89789763 |
Sukanth07/abirate-english-quotes-transformed | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 502729
num_examples: 2508
download_size: 314682
dataset_size: 502729
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dev2bit/es2bash | ---
license: apache-2.0
task_categories:
- text-generation
language:
- es
tags:
- code
---
# ES2Bash
This dataset contains a collection of natural language requests (in Spanish) and their corresponding bash commands. The purpose of this dataset is to provide examples of requests and their associated bash commands to facilitate machine learning and the development of natural language processing systems related to command-line operations.
# Features
The dataset consists of two main features:
* Natural Language Request (ES): This feature contains natural language requests written in Spanish. The requests represent tasks or actions to be performed using command-line commands.
* Bash Command: This feature contains the bash commands associated with each natural language request. The bash commands represent the way to execute the requested task or action using the command line.
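For illustration, an entry pairs a Spanish request with its bash command; the examples below are hypothetical but follow the two-feature schema described above:

```python
# Hypothetical examples following the (request, command) schema of ES2Bash.
examples = [
    {"request": "muestra el contenido del archivo notas.txt", "command": "cat notas.txt"},
    {"request": "lista los archivos del directorio actual", "command": "ls"},
    {"request": "cambia al directorio /tmp", "command": "cd /tmp"},
]

# Each of the initial commands (cat, ls, cd) is represented.
commands = {e["command"].split()[0] for e in examples}
print(sorted(commands))  # ['cat', 'cd', 'ls']
```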
# Initial Commands
The dataset initially contains requests related to the following commands:
* cat: Requests involving reading text files.
* ls: Requests related to obtaining information about files and directories at a specific location.
* cd: Requests to change the current directory.
# Dataset Expansion
In addition to the initial commands mentioned above, there are plans to expand this dataset to include more common command-line commands. The expansion will cover a broader range of tasks and actions that can be performed using command-line operations.
Efforts will also be made to improve the existing examples and ensure that they are clear, accurate, and representative of typical requests that users may have when working with command lines.
# Request Statistics
In the future, statistical data will be provided on the requests present in this dataset. This data may include information about the distribution of requests in different categories, the frequency of use of different commands, and any other relevant analysis to better understand the usage and needs of command-line users.
# Request Collection Process
This dataset is the result of a combination of requests generated by language models and manually added requests. The requests generated by language models were based on existing examples and prior knowledge related to the usage of command lines. A manual review was then conducted to ensure the quality and relevance of the requests. |
hippocrates/OphthoSummarization_test | ---
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 27249
num_examples: 30
- name: valid
num_bytes: 27249
num_examples: 30
- name: test
num_bytes: 27249
num_examples: 30
download_size: 59676
dataset_size: 81747
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
---
|
somewheresystems/dataclysm-pubmed | ---
license: apache-2.0
language:
- en
tags:
- pubmed
- medical
- medicine
- NIH
- science
pretty_name: dataclysm-pubmed
size_categories:
- 10M<n<100M
---
# DATACLYSM PATCH 0.0.4: PUBMED
## USE THE NOTEBOOK TO GET STARTED!
https://github.com/somewheresystems/dataclysm
# somewheresystems/dataclysm-pubmed
This dataset comprises 35.7 million PubMed metadata entries, including titles and abstracts (~69% of entries have an abstract), with two new columns added: title-embeddings and abstract-embeddings. These additional columns were generated using the bge-small-en-v1.5 embeddings model. The dataset was sourced from the PubMed Baseline as of December 12, 2023. https://ftp.ncbi.nlm.nih.gov/pubmed/baseline/
# Embeddings Model
We used https://huggingface.co/BAAI/bge-small-en-v1.5 to embed the `title` and `abstract` fields.
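Once the title or abstract embeddings are loaded, nearest-neighbour retrieval reduces to cosine similarity; a minimal numpy sketch (the vectors below are random stand-ins for real bge-small-en-v1.5 embeddings, which are 384-dimensional):

```python
import numpy as np

def top_k(query: np.ndarray, corpus: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k corpus rows most cosine-similar to the query."""
    q = query / np.linalg.norm(query)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    scores = c @ q
    return np.argsort(-scores)[:k]

rng = np.random.default_rng(0)
corpus = rng.normal(size=(100, 384))           # stand-in for abstract-embeddings
query = corpus[42] + 0.01 * rng.normal(size=384)  # a query very close to row 42
print(top_k(query, corpus, k=1))  # [42]
```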
## Contact
Please contact hi@dataclysm.xyz for inquiries. |
notzero/oasstdt | ---
dataset_info:
features:
- name: message_id
dtype: string
- name: parent_id
dtype: string
- name: user_id
dtype: string
- name: created_date
dtype: string
- name: text
dtype: string
- name: role
dtype: string
- name: lang
dtype: string
- name: review_count
dtype: int64
- name: review_result
dtype: bool
- name: deleted
dtype: bool
- name: rank
dtype: float64
- name: synthetic
dtype: bool
- name: model_name
dtype: 'null'
- name: detoxify
struct:
- name: identity_attack
dtype: float64
- name: insult
dtype: float64
- name: obscene
dtype: float64
- name: severe_toxicity
dtype: float64
- name: sexual_explicit
dtype: float64
- name: threat
dtype: float64
- name: toxicity
dtype: float64
- name: message_tree_id
dtype: string
- name: tree_state
dtype: string
- name: emojis
struct:
- name: count
sequence: int64
- name: name
sequence: string
- name: labels
struct:
- name: count
sequence: int64
- name: name
sequence: string
- name: value
sequence: float64
splits:
- name: train
num_bytes: 81660237
num_examples: 84433
- name: validation
num_bytes: 589940
num_examples: 599
download_size: 26302655
dataset_size: 82250177
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
aben118/common_voice_13_0_hi_pseudo_labelled | ---
dataset_info:
config_name: hi
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: whisper_transcript
sequence: int64
splits:
- name: train
num_bytes: 653129.0
num_examples: 20
- name: validation
num_bytes: 653129.0
num_examples: 20
- name: test
num_bytes: 653129.0
num_examples: 20
download_size: 1940046
dataset_size: 1959387.0
configs:
- config_name: hi
data_files:
- split: train
path: hi/train-*
- split: validation
path: hi/validation-*
- split: test
path: hi/test-*
---
|
liaad/machine_translation_dataset_detokenized | ---
dataset_info:
- config_name: journalistic
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': pt-PT
'1': pt-BR
splits:
- name: train
num_bytes: 1283261148
num_examples: 1845205
download_size: 864052343
dataset_size: 1283261148
- config_name: legal
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': PT-PT
'1': PT-BR
splits:
- name: train
num_bytes: 148927683
num_examples: 477903
download_size: 91110976
dataset_size: 148927683
- config_name: literature
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': pt-PT
'1': pt-BR
splits:
- name: train
num_bytes: 55646572
num_examples: 225
download_size: 19697267
dataset_size: 55646572
- config_name: politics
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': pt-PT
'1': pt-BR
splits:
- name: train
num_bytes: 367487667
num_examples: 14328
download_size: 200081078
dataset_size: 367487667
- config_name: social_media
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': pt-PT
'1': pt-BR
splits:
- name: train
num_bytes: 371972738
num_examples: 3074774
download_size: 266674007
dataset_size: 371972738
- config_name: web
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': PT-PT
'1': PT-BR
splits:
- name: train
num_bytes: 1372865174
num_examples: 279555
download_size: 705408533
dataset_size: 1372865174
configs:
- config_name: journalistic
data_files:
- split: train
path: journalistic/train-*
- config_name: legal
data_files:
- split: train
path: legal/train-*
- config_name: literature
data_files:
- split: train
path: literature/train-*
- config_name: politics
data_files:
- split: train
path: politics/train-*
- config_name: social_media
data_files:
- split: train
path: social_media/train-*
- config_name: web
data_files:
- split: train
path: web/train-*
---
|
runesc/lotr-book | ---
dataset_info:
features:
- name: text
sequence: string
splits:
- name: train
num_bytes: 66609
num_examples: 1
download_size: 37408
dataset_size: 66609
---
# Dataset Card for "lotr-book"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
herutriana44/drugbank_drug_target_label_mapping_amino_acid_pair | ---
license: mit
---
|
CyberHarem/emanuele_pessagno_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of emanuele_pessagno/エマヌエーレ・ペッサーニョ/埃曼努埃尔·佩萨格诺 (Azur Lane)
This is the dataset of emanuele_pessagno/エマヌエーレ・ペッサーニョ/埃曼努埃尔·佩萨格诺 (Azur Lane), containing 13 images and their tags.
The core tags of this character are `long_hair, pink_hair, breasts, pink_eyes, bangs, hairband, hair_between_eyes, large_breasts, purple_eyes, ahoge, bow, very_long_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 13 | 21.48 MiB | [Download](https://huggingface.co/datasets/CyberHarem/emanuele_pessagno_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 13 | 10.89 MiB | [Download](https://huggingface.co/datasets/CyberHarem/emanuele_pessagno_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 28 | 21.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/emanuele_pessagno_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 13 | 18.53 MiB | [Download](https://huggingface.co/datasets/CyberHarem/emanuele_pessagno_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 28 | 32.32 MiB | [Download](https://huggingface.co/datasets/CyberHarem/emanuele_pessagno_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/emanuele_pessagno_azurlane',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some recurring outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------|
| 0 | 13 |  |  |  |  |  | 1girl, solo, looking_at_viewer, blush, cleavage, frills, long_sleeves, white_dress |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | looking_at_viewer | blush | cleavage | frills | long_sleeves | white_dress |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------------------|:--------|:-----------|:---------|:---------------|:--------------|
| 0 | 13 |  |  |  |  |  | X | X | X | X | X | X | X | X |
|
vvigl3/daryaaa | ---
license: openrail
---
|
HuggingFaceM4/imagenet1k_support_1k_query_sets_part_4 | |
Sagicc/audio-lmb-ds | ---
dataset_info:
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcript
dtype: string
splits:
- name: train
num_bytes: 529807303.082
num_examples: 2493
download_size: 759351337
dataset_size: 529807303.082
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
language:
- sr
size_categories:
- 1K<n<10K
---
### Audio dataset from Library "Milutin Bojic" digital repository
This dataset was created from a multimedia digital collection, using SRT files to split the audio. It contains material that our member, Mihailo Miljkovic, dictated into a recorder: memories from his very interesting life.
Many thanks to the great folks from [CLASSLA - CLARIN Knowledge Centre for South Slavic Languages](https://huggingface.co/classla),
[Nikola Ljubesic](https://huggingface.co/nljubesi) and [Peter Rupnik](https://huggingface.co/5roop), for their help in adapting the code for publishing on HF!
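Splitting audio by SRT cues, as mentioned above, relies on parsing the subtitle timestamps into segment boundaries. A minimal sketch of that parsing step (the actual pipeline used for this dataset may differ):

```python
import re

def parse_srt(srt_text):
    """Parse SRT subtitle text into (start_sec, end_sec, text) segments,
    which can then be used to cut the audio into per-transcript chunks."""
    pattern = re.compile(
        r"(\d{2}):(\d{2}):(\d{2}),(\d{3}) --> (\d{2}):(\d{2}):(\d{2}),(\d{3})"
    )
    segments = []
    for block in re.split(r"\n\s*\n", srt_text.strip()):
        lines = block.strip().splitlines()
        for i, line in enumerate(lines):
            m = pattern.match(line.strip())
            if m:
                h1, m1, s1, ms1, h2, m2, s2, ms2 = map(int, m.groups())
                start = h1 * 3600 + m1 * 60 + s1 + ms1 / 1000
                end = h2 * 3600 + m2 * 60 + s2 + ms2 / 1000
                text = " ".join(lines[i + 1:]).strip()
                segments.append((start, end, text))
                break
    return segments

example = """1
00:00:01,000 --> 00:00:04,500
Prva recenica.

2
00:00:05,000 --> 00:00:08,250
Druga recenica."""

segments = parse_srt(example)
```

Each `(start, end)` pair can be passed to an audio library (e.g. pydub or soundfile) to cut the corresponding slice of the recording.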
|
Legend0300/MBTI | ---
license: mit
---
|
mabounassif/foodsmarts_ingredient_phrase_ner | ---
dataset_info:
features:
- name: id
dtype: string
- name: input
dtype: string
- name: tokens
sequence: string
- name: tags
sequence:
class_label:
names:
'0': B-QTY
'1': I-QTY
'2': B-UNIT
'3': I-UNIT
'4': B-NAME
'5': I-NAME
'6': B-COMMENT
'7': I-COMMENT
'8': B-OTHER
'9': I-OTHER
splits:
- name: train
num_bytes: 1885
num_examples: 16
download_size: 3819
dataset_size: 1885
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
autoevaluate/autoeval-staging-eval-billsum-default-3fec5f-14625986 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- billsum
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP11
metrics: []
dataset_name: billsum
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP11
* Dataset: billsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. |
CyberHarem/nagatomi_hasumi_idolmastercinderellagirls | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of nagatomi_hasumi/長富蓮実 (THE iDOLM@STER: Cinderella Girls)
This is the dataset of nagatomi_hasumi/長富蓮実 (THE iDOLM@STER: Cinderella Girls), containing 56 images and their tags.
The core tags of this character are `brown_hair, brown_eyes, short_hair, hairband, bangs`, which are pruned in this dataset.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:-------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 56 | 53.22 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nagatomi_hasumi_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 56 | 40.84 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nagatomi_hasumi_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 126 | 81.53 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nagatomi_hasumi_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 56 | 50.64 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nagatomi_hasumi_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 126 | 97.35 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nagatomi_hasumi_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/nagatomi_hasumi_idolmastercinderellagirls',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some recurring outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------|
| 0 | 19 |  |  |  |  |  | 1girl, solo, open_mouth, dress, blush, looking_at_viewer, :d, puffy_short_sleeves, breasts, flower, gloves |
| 1 | 6 |  |  |  |  |  | 1girl, smile, solo, card_(medium), character_name, flower_(symbol), gloves, earrings, microphone, star_(symbol) |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | open_mouth | dress | blush | looking_at_viewer | :d | puffy_short_sleeves | breasts | flower | gloves | smile | card_(medium) | character_name | flower_(symbol) | earrings | microphone | star_(symbol) |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:-------------|:--------|:--------|:--------------------|:-----|:----------------------|:----------|:---------|:---------|:--------|:----------------|:-----------------|:------------------|:-----------|:-------------|:----------------|
| 0 | 19 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | X | | | | | | | | | X | X | X | X | X | X | X | X |
|
zhangshuoming/c_x86_O0_exebench_numeric_2k_json_cleaned | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 6048893.6625
num_examples: 925
download_size: 348560
dataset_size: 6048893.6625
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "c_x86_O0_exebench_numeric_2k_json_cleaned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hlillemark/flores200_eng_input_scaffolding_mix3_large_mt5 | ---
dataset_info:
features:
- name: id
dtype: int32
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 18579975878
num_examples: 20480000
- name: val
num_bytes: 3827042
num_examples: 5000
- name: test
num_bytes: 7670994
num_examples: 10000
download_size: 8884090645
dataset_size: 18591473914
---
# Dataset Card for "flores200_eng_input_scaffolding_mix3_large_mt5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Yannael/orca_DPO_pairs_gpt3.5 | ---
dataset_info:
features:
- name: system
dtype: string
- name: question
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: gpt35
dtype: string
splits:
- name: train
num_bytes: 1744863
num_examples: 500
download_size: 970463
dataset_size: 1744863
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ai4bharat/human-eval | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- hi
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1<n<100
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
pretty_name: Airavata HumanEval
language_bcp47:
- hi-IN
dataset_info:
- config_name: human-eval
features:
- name: id
dtype: string
- name: intent
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: prompt
dtype: string
splits:
- name: test
num_bytes: 34114
num_examples: 50
download_size: 21873
dataset_size: 34114
configs:
- config_name: human-eval
data_files:
- split: test
path: data/test-*
---
# Airavata HumanEval Prompts
This benchmark contains a set of prompts written by real users to evaluate LLMs on real-world tasks and test them for different abilities. We collect prompts for the 5 abilities listed below:
- Long: Ability to generate long-form text like writing essays, speeches, reports, etc.
- Fact-Ops: Ability to give factual opinions and explanations like seeking recommendations, seeking advice, opinions, explanations, etc.
- Content: Ability to make content accessible, e.g. summarization, layman explanations, etc.
- Lang-Creativity: Ability to be creative in language, e.g. finding anagrams, rhyming words, vocabulary enhancement, etc.
- Culture: Ability to answer questions related to Indian Culture.
For each ability we define a list of intents and domains which are provided to the users along with detailed instructions about what prompts are expected.
We recommend that readers check out our [official blog post](https://ai4bharat.github.io/airavata) for more details.
## Citation
```bibtex
@misc{airavata2024,
title = {Introducing Airavata: Hindi Instruction-tuned LLM},
url = {https://ai4bharat.github.io/airavata},
author = {Jay Gala and Thanmay Jayakumar and Jaavid Aktar Husain and Aswanth Kumar and Mohammed Safi Ur Rahman Khan and Diptesh Kanojia and Ratish Puduppully and Mitesh Khapra and Raj Dabre and Rudra Murthy and Anoop Kunchukuttan},
month = {January},
year = {2024}
}
```
|
Nexdata/Sign_Language_Gestures_Recognition_Data | ---
license: cc-by-nc-4.0
---
## Description
180,718 Images - Sign Language Gestures Recognition Data. The data diversity includes multiple scenes, 41 static gestures, 95 dynamic gestures, multiple photographic angles, and multiple light conditions. In terms of data annotation, 21 landmarks, gesture types, and gesture attributes were annotated. This dataset can be used for tasks such as gesture recognition and sign language translation.
For more details, please refer to the link: https://www.nexdata.ai/datasets/980?source=Huggingface
## Data size
180,718 images, including 83,013 images of static gestures, 97,705 images of dynamic gestures
## Population distribution
The race distribution is Asian; the gender distribution is male and female; the age distribution is mainly young and middle-aged people
## Collection environment
including indoor scenes and outdoor scenes
## Collection diversity
including multiple scenes, 41 static gestures, 95 dynamic gestures, multiple photographic angles, multiple light conditions
## Device
cellphone
## Data format
the image data format is .jpg, the annotation file format is .json
## Collecting content
sign language gestures were collected in different scenes
## Annotation content
21 landmarks annotation (each landmark includes the attribute of visible or invisible), gesture type annotation, gesture attributes annotation (left hand or right hand)
## Accuracy
Accuracy requirement: a landmark annotation is considered qualified if its location errors in the x and y directions are both less than 3 pixels. Accuracy of landmark annotation: taking each landmark as the unit, the accuracy rate shall be more than 95%
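The 3-pixel tolerance and 95% threshold described above can be checked mechanically. A minimal sketch of such a QA check (the `(x, y)` tuple layout here is a hypothetical simplification, not the dataset's actual JSON schema):

```python
def landmark_qualified(annotated, reference, tol=3):
    """For each landmark, check whether the annotated point is within
    `tol` pixels of the reference in both the x and y directions."""
    return [
        abs(ax - rx) < tol and abs(ay - ry) < tol
        for (ax, ay), (rx, ry) in zip(annotated, reference)
    ]

def annotation_accuracy(annotated, reference, tol=3):
    """Fraction of qualified landmarks; the requirement is > 0.95."""
    checks = landmark_qualified(annotated, reference, tol)
    return sum(checks) / len(checks)

# hypothetical annotated vs. reference landmark coordinates
ann = [(100, 200), (150, 251), (302, 98)]
ref = [(101, 199), (150, 248), (300, 99)]
acc = annotation_accuracy(ann, ref)  # second landmark misses by exactly 3 px in y
```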
# Licensing Information
Commercial License |
NgVN/formal-meeting | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 72272595.0
num_examples: 300
- name: test
num_bytes: 44082454.0
num_examples: 183
download_size: 116263255
dataset_size: 116355049.0
---
# Dataset Card for "formal-meeting"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
johnbradley/Kydoimos | ---
license: mit
---
# Challenging Butterfly Image Dataset
This dataset was __intentionally created with bad practices__ to serve as a challenging dataset for educational purposes.
This data was created using a subset of the Hoyal Cuthill et al. dataset available at doi:10.5061/dryad.2hp1978.
Citations for the original dataset from which this was adapted and its accompanying paper:
* Hoyal Cuthill, Jennifer F. et al. (2019), Data from: Deep learning on butterfly phenotypes tests evolution’s oldest mathematical model, Dryad, Dataset, https://doi.org/10.5061/dryad.2hp1978.
* Hoyal Cuthill, Jennifer F. et al. (2019), Deep learning on butterfly phenotypes tests evolution’s oldest mathematical model, Science Advances, Article-journal, https://doi.org/10.1126/sciadv.aaw4967.
|